John Danaher interview – Robot Sex: Social and Ethical Implications

Gigolo Jane and Gigolo Joe robots in the film A.I.

Via Philosophical Disquisitions.

Through the wonders of modern technology, Adam Ford and I sat down for an extended video chat about the new book Robot Sex: Social and Ethical Implications (MIT Press, 2017). You can watch the full thing above or on YouTube. Topics covered include:

  • Why did I start writing about this topic?
  • Sex work and technological unemployment
  • Can you have sex with a robot?
  • Is there a case to be made for the use of sex robots?
  • The Campaign Against Sex Robots
  • The possibility of valuable, loving relationships between humans and robots
  • Sexbots as a social experiment

Be sure to check out Adam’s other videos and support his work.

Reblog> Author-Meets-More-or-Less-Friendlies: The Priority of Injustice at AAG 2018

The Priority of Injustice – Clive Barnett

Via Clive. This will be worth going to if you’re going to the AAG in 2018…

Author-Meets-More-or-Less-Friendlies: The Priority of Injustice at AAG 2018

I’m delighted to announce that the very wonderful Michael Samers has arranged an Author Meets Critics session on The Priority of Injustice, my new book (did I mention that?) at the annual meeting of the Association of American Geographers in New Orleans in April. It’s a great panel, with Joshua Barkan (U. of Georgia), Jennifer Fluri (U. of Colorado, Boulder), Leila Harris (UBC), and Kirsi Kallio (University of Tampere) all commenting on the book. The session is sponsored by AAG’s Political Geography Specialty Group and Ethics, Justice, and Human Rights Specialty Group. There’s a nice symmetry about the prospect of discussing the book in New Orleans – the last time the conference was there, in 2003, I presented a paper on theories of radical democracy that was my first post-Culture and Democracy effort at articulating the limits of broadly post-structuralist approaches to that topic, an effort that led eventually to the shape of The Priority of Injustice (yes, I’m a slow thinker).

Reblog > The Priority of Injustice

The Priority of Injustice – Clive Barnett

My colleague Prof. Clive Barnett’s excellent new book is out. He introduces it in a recent blogpost:

The Priority of Injustice

So, finally, the book that I have been writing, on and off, for the last four years, The Priority of Injustice, has been published – or at least, it’s real, since the formal publication date is next month (so I reserve the right to blog further about it as and when). It arrived earlier this week – a rather hectic week, which has oddly meant I have been too busy to experience the strange sense of anti-climax that often accompanies the arrival of the finished form of something that you have been making for so long.

This is, in one sense, my Exeter book – the first thing I did in my very first week here, four years ago, was write the proposal and send it off to prospective publishers. It’s also, though, my Swindon book, a book which attempts to articulate an approach to theorising in an ordinary spirit, and which has been published just a few weeks after moving away from that very ordinary town where I have lived while writing it.

It’s a beautiful object, with a great cover image by Helen Burgess (I bought one of her pictures once, in one of those open-house art trail events that you get in places like Bishopston in Bristol, so that’s why I knew of her work; it turns out she is part of a geography-friendly network of artists). And I am honoured and humbled to have the book published in the University of Georgia Press’s very excellent Geographies of Justice and Social Transformation series.

I’m now faced with the challenge of promoting the book. I’m quite fond of the Coetzee-esque principle that books should have to make their own way in the world without the help of the author; on the other hand, I have some sense of responsibility towards the argument made in the book, a responsibility to help project it into the world. I’ve already realised that it’s not the sort of book that lends itself to an easy press release – ‘THEORY COULD BE THEORISED DIFFERENTLY’, SAYS THEORY-BOY doesn’t really work as a headline, does it?

Read the full blogpost.

The Economist ‘Babbage’ podcast: “Deus Ex Machina”

Glitched still from the film "Her"

An interesting general (non-academic, non-technical) discussion about what “AI” is, what it means culturally and how it is variously thought about. Interesting to reflect on the way ideas about computation, “algorithms”, “intelligence” and so on play out… something that maybe isn’t discussed enough… I like the way the discussion turns around “thinking” and the suggestion of the word “reckoning”. Worth a listen…

AI Now report

My Cayla Doll

The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

Monday thought – OOOber?

statue of a man holding his head with his right hand

A thought experiment based upon flippant suggestion:

Object Oriented Ontology is to philosophy what Uber is to tech development.

Both ‘disruptive’, Uber and OOO have expanded beyond their initial contexts, which is by several measures ‘success’. Both have become discursive shortcuts for a particular set of ideas – ‘gig economy’ and ‘automation’ for Uber, and ‘speculative realism’ and maybe even ‘metaphysics’ for OOO (and there are possibly other associations for these terms too).

Neither OOO nor Uber came up first with the ideas they propound; they ‘innovated’ from others (not necessarily a problem) and then made grand claims on that basis (maybe a problem).

Neither of the groups involved in the development of Uber or OOO has acted especially ethically, although Uber is almost certainly significantly worse (this isn’t a like-for-like comparison). This is one of the other ways in which these words have become pregnant with meaning. Uber has been variously documented as having a problem with misogyny in the workplace and has also teetered on the edge of legality through ‘greyball’. Some of the proponents of OOO have been accused of bullying graduate students online and at conferences (I recognise gossip can be pernicious, but I’ve heard this from several unrelated sources). It has also been suggested that some of these folks are garnering a reputation for being somewhat ‘macho’ in attitude – it probably doesn’t help that the lead figures are all male, that they write lots of earnest manifestos, or that they succumb to profiles in newspapers that call them “philosopher prophet”. Of course, neither OOO nor Uber is unique in this; similar observations/accusations have been made of antecedent tech firms and philosophical movements – one need only look to TV programmes like “Silicon Valley” or open up the ‘theory boy’ can of worms.

Finally, there is also a sense in which the successes of both Uber and OOO are easily co-opted into these (pejorative) narratives. There are grounds for this – well, certainly for Uber – but the visibility that success brings makes it easier to tell these stories. I have no doubt that such alleged behaviour is not limited to those involved in Uber or OOO. Likewise, those categories may be contested, and we shouldn’t tar everyone who works for a company or does a particular branch of theory with the same brush. Goodness knows there are plenty of “tech bros” and, for want of a better term, “theory bros” outside of Uber and OOO.

Such a critique, however flippant, can come across as a bit pompous or sly. I cannot stand outside this; I am, to a degree, complicit. For example, the cartel-like citational practices used by “theory bros” are easy to slip into – many of us have succumbed. To recognise stupidity, as both Ronell and Stiegler point out, is to recognise my own stupidity – the lesson, perhaps the ‘ethic’, is to pass through it towards knowledge. Not the reproduction of the same knowledge (that’s patriarchy), and not always, I think, difference for its own sake (isn’t that what the “tech bros” call “disruption”? and doesn’t that always require being in a privileged position?) but perhaps a thoughtful defiance – not ‘laughing along’. This could mean more “no’s” (following Sara Ahmed). Maybe even something like a NO movement – “No Ontology”, at least the kinds of ontology that get used as authority in the kinds of theory top trumps that get played by some of us in the social sciences and humanities… of course this isn’t a novel suggestion either; it’s somewhat akin to feminist standpoint theory.

Perhaps I’m being unkind to OOO and those who do/use it. Success breeds contempt and all that… but the thought experiment was interesting to run through, in my own ham-fisted way…

Scaring you into ‘digital safety’

I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the module about technology that I run. In particular, about the sense of risk and threat that sometimes gets articulated about digital media, and how this maybe carries with it other kinds of narrative about technology, such as versions of determinism. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility – for whom, and why – in relation to ‘digital’ media… I could write more but I’ll just post a few of the videos for now…

Ellen Ullman’s Life in Code

An interesting account of Ellen Ullman (author of Close to the Machine) and her most recent book, Life in Code, which sounds fantastic and very much worth a read (just like Close to the Machine), along with something of its context. From the NYT:

LIFE IN CODE

A Personal History of Technology

By Ellen Ullman

Illustrated. 306 pp. Farrar, Straus & Giroux.

As milestone years go, 1997 was a pretty good one. The computers may have been mostly beige and balky, but certain developments were destined to pay off down the road. Steve Jobs returned to a floundering Apple after years of corporate exile, IBM’s Deep Blue computer finally nailed the world-champion chess master Garry Kasparov with a checkmate, and a couple of Stanford students registered the domain name for a new website called google.com. Nineteen ninety-seven also happened to be the year that the software engineer Ellen Ullman published “Close to the Machine: Technophilia and Its Discontents,” her first book about working as a programmer in a massively male-dominated field.

That slender volume became a classic of 20th-century digital culture literature and was critically praised for its sharp look at the industry, presented in a literary voice that ignored the biz-whiz braggadocio of the early dot-com era. The book had obvious appeal to technically inclined women — desktop-support people like myself then, computer-science majors, admirers of Donna J. Haraway’s feminist cyborg manifesto, those finding work in the newish world of website building — and served as a reminder that someone had already been through it all and took notes for the future.

Then Ullman retired as a programmer, logging out to go write two intense character-driven thriller novels and the occasional nonfiction essay. The digital economy bounced back after the Epic Fail of 2000 and two decades later, those techno-seeds planted back in 1997 have bloomed. Just look at all those smartphones, constantly buzzing with news alerts and calendar notifications as we tell the virtual assistant to find us Google Maps directions to the new rice-bowl place. What would Ullman think of all this? We can now find out, as she’s written a new book, “Life in Code: A Personal History of Technology,” which manages to feel like both a prequel and a sequel to her first book.

Read the rest on the NYT website.

Reblog> Whither the Creative City? The Comeuppance of Richard Florida

Nice post from Jason Luger:

Whither the Creative City? The Comeuppance of Richard Florida

Talent, Technology, and Tolerance, said Florida (2002), were the pre-conditions for a successful urban economy. Florida’s ‘creative class’ theory, much copied, emulated and critically maligned, delineated urban regions with ‘talent’ (PhDs); ‘technology’ (things like patents granted); and ‘tolerance’ (represented by a rather arbitrary ‘gay index’ of same-sex households in census data).

This combination, according to Florida’s interpretation of his data, indicated urban creative ‘winners’ versus urban ‘losers’: blue collar cities with more traditional economies and traditional worldviews. Creative people want to be around other creative people, wrote Florida, so failing to provide an ideal urban environment for them will result in their ‘flight’ (2005) and the loss of all the benefits of the creative economy. Therefore, to win in the ‘new economy’ (Harvey, 1989), cities need to compete for, and win the affections of, the ‘creative class’. Or so Florida then believed.

Read the full post.