AI Now’s Data Genesis programme – job opportunity

Facial tracking system, showing gaze direction, emotion scores and demographic profiling

The excellent AI Now Institute has announced a fantastic new project titled ‘Data Genesis’ – I’ve copied some details below. Importantly, there are jobs, so if you think you might fit the bill then apply!

 AI Now Institute has been developing new approaches to study and understand the role of training data in the machine learning field. Key research questions include: What type of information is used as training data? Who generates and collects it and for what purpose? What segments of society does it reflect? Who and what does it exclude? And how does that affect the functioning of AI systems themselves?

The Data Genesis program’s goal is to answer and demystify these questions through three core components:

  • Archiving and analyzing the origin and construction of key datasets that serve as foundations for today’s AI systems;
  • Producing visualizations, maps, and other designs to help crystallize and contextualize what this data is and what it means to communities, practitioners, companies, and policymakers; and
  • Convening experts from across disciplines to help build a field around this topic.

The rapid proliferation of AI into various social and political contexts demands a thorough understanding of the data that these systems are trained on, including the biases and flaws this data may encode. Our Data Genesis program will investigate the complex foundation on which AI is built and will call into question the perception of AI as a magical force that is superior to human judgement.

Check out the jobs associated with the project here.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s ‘keynote’ (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s Modes of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time: exosomatisation and tertiary retention as the principal tendency of an originary technics that, in turn, has the character of a pharmakon. In more recent work, Stiegler articulates this in relation to the contemporary epoch (the anthropocene) as a tension, in a thermodynamic style, between entropy and negentropy. Stiegler’s argument, I think, is that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘A’s) – the anthropocene, attention and automation – which characterise the contemporary tendency towards proletarianisation: a loss of the knowledge and capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty-plus years, both in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory) and in relation to what seems to me a moral-philosophical character in Stiegler’s project, in terms of his diagnosis of the anthropocene and his call for a ‘neganthropocene’.

Published> A very public cull – the anatomy of an online issue public


I am pleased to share that an article I co-authored with Rebecca Sandover (first author) and Steve Hinchliffe has finally been published in Geoforum. Congratulations to Rebecca on this achievement – the article went through a lengthy review process but is now available open access, so you can read the whole thing, for free, on the Geoforum website. To get a sense of the argument, here is the abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore, 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple nor to be expected, and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.

(More) Gendered imaginings of automata

My Cayla Doll

A few more bits on how automation gets gendered in particular contexts and settings. In particular, the identification of ‘home’, or certain sorts of intimacy, with particular kinds of domestic or caring work that then get gendered is something that has been increasingly discussed.

Two PhD researchers I am lucky enough to be working with, Paula Crutchlow (Exeter) and Kate Byron (Bristol), have approached some of these issues from different directions. Paula has had to wrangle with this in a number of ways in relation to the Museum of Contemporary Commodities but it was most visible in the shape of Mikayla, the hacked ‘My Friend Cayla Doll’. Kate is doing some deep dives on the sorts of assumptions that are embedded into the doing of AI/machine learning through the practices of designing, programming and so on. They are not, of course, alone. Excellent work by folks like Kate Crawford, Kate Devlin and Gina Neff (below) inform all of our conversations and work.

Here’s a collection of things that may provoke thought… I welcome any further suggestions or comments 🙂

Alexa, does AI have gender?


Alexa is female. Why? As children and adults enthusiastically shout instructions, questions and demands at Alexa, what messages are being reinforced? Professor Neff wonders if this is how we would secretly like to treat women: ‘We are inadvertently reproducing stereotypical behaviour that we wouldn’t want to see,’ she says.

Prof Gina Neff in conversation with Ruth Abrahams, OII.

Predatory Data: Gender Bias in Artificial Intelligence

It has been reported that female-sounding assistive chatbots regularly receive sexually charged messages. It was recently cited that five percent of all interactions with Robin Labs, whose bot platform helps commercial drivers with routes and logistics, are sexually explicit. The fact that the earliest female chatbots were designed to respond to these suggestions deferentially or with sass was problematic as it normalised sexual harassment.

Vidisha Mishra and Madhulika Srikumar – Predatory Data: Gender Bias in Artificial Intelligence

The Gender of Artificial Intelligence

Chart showing that the gender of artificial intelligence (AI) is not neutral
The gendering, or not, of chatbots, digital assistants and AI movie characters – Tyler Schnoebelen

Consistently representing digital assistants as female hard-codes a connection between a woman’s voice and subservience.

Stop Giving Digital Assistants Female Voices – Jessica Nordell, The New Republic

“The good robot”

Anki Vector personal robot

A fascinating and very evocative example of the ‘automative imagination’ in action in the form of an advertisement for the “Vector” robot from a company called Anki.

How to narrate or analyse such a robot? Well, the advert runs through several almost-archetypal figures of the ‘robot’, or of automation. The first is the cutesy, non-threatening pseudo-pet that Vector invites us to assume it is. This owes a lot to Wall-E (and the robots in Batteries Not Included, among countless other examples) and the doe-eyed characterisation of the faithful assistant/companion/servant. The second is the all-seeing surveillant machine uploading all your data to “the cloud”. The third comes in two examples of quasi-military monsters with shades of “The Terminator”, with a little helpless-baby jeopardy for good measure. Finally, a brief nod to HAL 9000, and the flip of the master/slave relation it represents, completes a whistle-stop tour of pop-culture understandings of ‘robots’, stitched together in order to sell you something.

I assume that the Vector actually still does the kinds of surveillance it is sending up in the advert, but I have no evidence – there is no publicly accessible copy of the terms & conditions for the operation of the robot in your home. However, in an advertorial on ‘Robotics Business Review‘, there is a quote that rather pushes one to suspect that Vector is yet another device that is, on the face of it, an ‘assistant’ but is also likely to be hoovering up everything it can about you and your family’s habits in order to sell that data on:

“We don’t want a person to ever turn this robot off,” Palatucci said. “So if the lights go off and it’s on your nightstand and he starts snoring, it’s not going to work. He really needs to use his sensors, his vision system, and his microphone to understand the context of what’s going on, so he knows when you want to interact, and more importantly, when you don’t.”

If we were to be cynical we might ask: why else would it need to be able to do all of this?

Anki Vector “Alive and aware”

Regardless, the advert is a useful example of how fictional representations of ‘robots’ bleed into contemporary commercial products we can take home – and perhaps even of how that imagery works as camouflage for the increasingly prevalent ‘extractive‘ business model of in-home surveillance.

HKW Speaking to Racial Conditions Today [video]

racist facial recognition

This video of a panel session at HKW entitled “Speaking to Racial Conditions Today” is well worth watching.

Follow this link (the video is not available for embedding here).

Inputs and discussions, 15 March 2018, with Zimitri Erasmus, Maya Indira Ganesh, Ruth Wilson Gilmore, David Theo Goldberg, Serhat Karakayali, Shahram Khosravi and Françoise Vergès (English original version).

New journal article> A very public cull: the anatomy of an online issue public


I am pleased to share that a paper that Rebecca Sandover, Steve Hinchliffe and I have had under review for some time has been accepted for publication. The paper comes from our project “Contagion”, which amongst other things examined the ways issue publics form and spread around public controversies – in this case the English badger cull of 2013/14. The research this article presents comes from mixed methods social media research, focused on Twitter. The methods and conversation have, of course, moved on a little in the last two years but I think the paper makes a contribution to how geographers in particular might think about doing social media-based research. I guess this, as a result, also fits into the recent (re)growth of ‘digital geographies’ too.
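To give a rough, hypothetical sense of the kind of network mapping that this sort of Twitter-based research can involve – this is my own minimal sketch, not the method used in the paper, and the field names and toy data are invented – one might build a retweet network from collected tweets and then look for loosely bounded groupings within it:

```python
# Hypothetical sketch: tracing candidate 'issue public' groupings in a
# retweet network. Assumes each collected tweet is a dict with a 'user'
# field and an optional 'retweeted_user' field (invented structure).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_retweet_network(tweets):
    """Build a directed graph where an edge A -> B means A retweeted B."""
    graph = nx.DiGraph()
    for tweet in tweets:
        source = tweet.get("user")
        target = tweet.get("retweeted_user")
        if source and target and source != target:
            # Weight repeated retweets so heavily-shared accounts stand out.
            if graph.has_edge(source, target):
                graph[source][target]["weight"] += 1
            else:
                graph.add_edge(source, target, weight=1)
    return graph

def candidate_groupings(graph):
    """Detect loosely bounded groupings (candidate 'issue publics')."""
    undirected = graph.to_undirected()
    return list(greedy_modularity_communities(undirected, weight="weight"))

# Toy data standing in for tweets collected around the badger cull debate.
tweets = [
    {"user": "alice", "retweeted_user": "carol"},
    {"user": "bob", "retweeted_user": "carol"},
    {"user": "dave", "retweeted_user": "erin"},
]
network = build_retweet_network(tweets)
for i, group in enumerate(candidate_groupings(network), start=1):
    print(f"grouping {i}: {sorted(group)}")
```

In the research itself any such groupings were, of course, interpreted alongside qualitative reading of the tweets and wider context; the point of the sketch is simply that the ‘groupings’ at stake are traceable in the network structure of sharing activity.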

The article is titled “A very public cull: the anatomy of an online issue public” and will be published in Geoforum in the not-too-distant future. Feel free to get in touch for a pre-print version.

Abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore, 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple nor to be expected, and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.