Call for papers: Geography of/with A.I

Still from the video for All Is Full of Love by Björk

I very much welcome any submissions to this call for papers for the proposed session for the RGS-IBG annual conference (in London in late August), outlined below. I also welcome anyone getting in touch to discuss possible papers or ideas for other sorts of interventions.

Call for papers:

We are variously being invited to believe that (mostly Global North, Western) societies are on the cusp, or in the early stages, of another industrial revolution led by “Artificial Intelligence” – as many popular books (e.g. Brynjolfsson and McAfee 2014) and reports from governments and management consultancies alike attest (e.g. PwC 2018, UK POST 2016). The goal of this session is to convene a discussion explicitly focused on the ways in which geographers already study (with) ‘Artificial Intelligence’ and, perhaps, to outline ways in which we might contribute to wider debates concerning ‘AI’.

There is widespread, inter-disciplinary analysis of ‘AI’ from a variety of perspectives, from embedded systematic bias (Eubanks 2017, Noble 2018) to the kinds of under-examined rationales and work through which such systems emerge (e.g. Adam 1998, Collins 1993), and further to the sorts of ethical-moral frameworks that we should apply to such technologies (Gunkel 2012, Vallor 2016). In similar, if somewhat divergent, ways, geographers have variously been interested in the kinds of (apparently) autonomous algorithms or sociotechnical systems that are integrated into decision-making processes (e.g. Amoore 2013, Kwan 2016); encounters with apparently autonomous ‘bots’ (e.g. Cockayne et al. 2017); the integration of AI techniques into spatial analysis (e.g. Openshaw & Openshaw 1997); and the processing of ‘big’ data in order to discern things about, or control, people (e.g. Leszczynski 2015). These conversations rarely appear to converge in conference proceedings and academic outputs; nevertheless, there are many ways in which geographical research does, and can continue to, contribute to these contemporary concerns.

This session invites papers that make explicit the ways in which geographers are (already) contributing to research on and with ‘AI’, that identify research questions which are (perhaps) uniquely geographical in relation to AI, and that thereby advance wider inter-disciplinary debates concerning ‘AI’.

Examples of topics might include (but are certainly not limited to):

  • A.I and governance
  • A.I and intimacy
  • Artificially intelligent mobilities
  • Autonomy, agency and the ethics of A.I
  • Autonomous weapons systems
  • Boosterism and ‘A.I’
  • Feminist and intersectional interventions in/with A.I
  • Gender, race and A.I
  • Labour, work and A.I
  • Machine learning and cognitive work
  • Playful A.I
  • Science fiction, spatial imaginations and A.I
  • Surveillance and A.I

Please send submissions (title, abstract of 250 words, and author details) to Sam Kinsley by 31st January 2019.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s ‘keynote’ (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time: exosomatisation and tertiary retention as the principal tendency of an originary technics that, in turn, has the character of a pharmakon. In more recent work, Stiegler articulates this in relation to the contemporary epoch (the Anthropocene) as a (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project – the trilogy of ‘As’: the Anthropocene, attention and automation – which characterise the contemporary tendency towards proletarianisation, a loss of knowledge and of capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years, both in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory) and in relation to what seems to me to be a moral-philosophical character to Stiegler’s project, in terms of his diagnosis of the Anthropocene and a call for a ‘neganthropocene’.

HKW: Speaking to Racial Conditions Today [video]

racist facial recognition

This video of a panel session at HKW entitled “Speaking to Racial Conditions Today” is well worth watching.

Follow this link (the video is not available for embedding here).

Inputs and discussions, 15 March 2018, with Zimitri Erasmus, Maya Indira Ganesh, Ruth Wilson Gilmore, David Theo Goldberg, Serhat Karakayali, Shahram Khosravi and Françoise Vergès (English original version).

“Decolonizing Technologies, Reprogramming Education” HASTAC 2019 call

Louise Bourgeois work of art

This looks interesting. Read the full call here.

Call for Proposals

On 16-18 May 2019, the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC), in partnership with the Institute for Critical Indigenous Studies at the University of British Columbia (UBC) and the Department of English at the University of Victoria (UVic), will be guests on the traditional, ancestral, and unceded territory of the hən̓q̓əmin̓əm̓-speaking Musqueam (xʷməθkʷəy̓əm) people, facilitating a conference about decolonizing technologies and reprogramming education.

Deadline for proposals is Monday 15 October 2018.

Submit a proposal. Please note: this link will take you to a new website (HASTAC’s installation of ConfTool), where you will create a new user account to submit your proposal. Proposals may be submitted in English, French, or Spanish.


Conference Theme

The conference will hold up and support Indigenous scholars and knowledges, centering work by Indigenous women and women of colour. It will engage how technologies are, can be, and have been decolonized. How, for instance, are extraction technologies repurposed for resurgence? Or, echoing Ellen Cushman, how do we decolonize digital archives? Equally important, how do decolonial and anti-colonial practices shape technologies and education? How, following Kimberlé Crenshaw, are such practices intersectional? How do they correspond with what Grace Dillon calls Indigenous Futurisms? And how do they foster what Eve Tuck and Wayne Yang describe as an ethic of incommensurability, unsettling not only assumptions of innocence but also discourses of reconciliation?

With these investments, HASTAC 2019: “Decolonizing Technologies, Reprogramming Education” invites submissions addressing topics such as:

  • Indigenous new media and infrastructures,
  • Self-determination and data sovereignty, accountability, and consent,
  • Racist data and biased algorithms,
  • Land-based pedagogy and practices,
  • Art, history, and theory as decolonial or anti-colonial practices,
  • Decolonizing the classroom or university,
  • Decolonial or anti-colonial approaches involving intersectional feminist, trans-feminist, critical race, and queer research methods,
  • The roles of technologies and education in the reclamation of language, land, and water,
  • Decolonial or anti-colonial approaches to technologies and education around the world,
  • Everyday and radical resistance to dispossession, extraction, and appropriation,
  • Decolonial or anti-colonial design, engineering, and computing,
  • Alternatives to settler heteropatriarchy and institutionalized ableism in education,
  • Unsettling or defying settler geopolitics and frontiers,
  • Trans-Indigenous activism, networks, and knowledges, and
  • Indigenous resurgence through technologies and education.

Some more A.I. links

Twiki the robot from Buck Rogers

This post contains some tabs I have had open in my browser for a while, pasted here both to save them somewhere I may remember to look and to share them with others who might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I?”, Christopher Noessel: An interesting attempt to compare popular sci-fi stories of A.I. with contemporary A.I. research manifestos, and to identify where we might not be telling ourselves stories about the things people are actually trying to do.


The ethics of crashes with self-driving cars: A roadmap, Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently talked about this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning” and its guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex‘.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

Unfathomable Scale – moderating social media platforms

Facebook logo reflected in a human eye

There’s a really nice piece by Tarleton Gillespie in Issue 04 of Logic, themed on “scale”, concerning the scale of social media platforms and how we might understand the qualitative as well as quantitative shifts that happen when things change in scale.

The scale is just unfathomable

But the question of scale is more than just the sheer number of users. Social media platforms are not just big; at this scale, they become fundamentally different than they once were. They are qualitatively more complex. While these platforms may speak of their online “community,” singular, at a billion active users there can be no such thing. Platforms must manage multiple and shifting communities, across multiple nations and cultures and religions, each participating for different reasons, often with incommensurable values and aims. And communities do not independently coexist on a platform. Rather, they overlap and intermingle—by proximity, and by design.

The huge scale of the platforms has robbed anyone who is at all acquainted with the torrent of reports coming in of the illusion that there was any such thing as a unique case… On any sufficiently large social network everything you could possibly imagine happens every week, right? So there are no hypothetical situations, and there are no cases that are different or really edgy. There’s no such thing as a true edge case. There’s just more and less frequent cases, all of which happen all the time.

No matter how they handle content moderation, what their politics and premises are, or what tactics they choose, platforms must work at an impersonal scale: the scale of data. Platforms must treat users as data points, subpopulations, and statistics, and their interventions must be semi-automated so as to keep up with the relentless pace of both violations and complaints. This is not customer service or community management but logistics—where concerns must be addressed not individually, but procedurally.

However, the user experiences moderation very differently. Even if a user knows, intellectually, that moderation is an industrial-sized effort, it feels like it happens on an intimate scale. “This is happening to me; I am under attack; I feel unsafe. Why won’t someone do something about this?” Or, “That’s my post you deleted; my account you suspended. What did I do that was so wrong?”

Reblog> Internet Addiction – watch the “Are We All Addicts Now?” video


Via Tony Sampson. Looks interesting >

This topic has been getting a lot of TV/Press coverage here in the UK. Here’s a video of a symposium discussing artistic resistance, critical theory strategies to ‘internet addiction’ and the book Are We All Addicts Now? Convened at Central St Martins, London on 7th Nov 2017. Introduced by Ruth Catlow with talks by Katriona Beales, Feral Practice, Emily Rosamond and myself…

@KatrionaBeales @FeralPractice @TonyDSpamson @EmilyRosamond & @furtherfield