‘New geographies of automation?’ at the RGS-IBG conference

Industrial factory robot arms

All of a sudden the summer is nearly over, apparently, and the annual conference of the Royal Geographical Society with the Institute of British Geographers is fast approaching, this year in Cardiff.

I am convening a double session on the theme of ‘New geographies of automation?’: two sessions of papers by some fantastic colleagues that promise to be really interesting. I am pleased to have this opportunity to invite colleagues to bring their work into conversation around a theme that is not only current in academic work but also, significantly, a renewed topic of interest among the wider public.

The session has two halves, broadly themed around ‘autonomy’ and ‘spacings’. Please find the abstracts for both below.

Details: Sessions 92 & 123 (in slots 3 & 4 – 14:40-16:20 & 16:50-18:30) | Bates Building, Lecture Theatre 1.4

This information is also accessible, with all of the details of venue etc., on the RGS-IBG conference website: session 1 ‘autonomy’ and session 2 ‘spacings’.

New Geographies of Automation? (1): Autonomy

1.1 An Automative Imagination

Samuel Kinsley, University of Exeter

This paper sets out to review some of the key ways in which automation gets imagined – the sorts of cultural, economic and social forms of imagination that are drawn upon and generated when discussing how automation works and the kinds of future that may come as a result. The aim here is not to validate or invalidate particular narratives of automation, but instead to think about how they are produced and what they tell us about how we tell stories about what it means to be ‘human’, who/what has agency, and what this may mean for how we think politically and spatially. To do this, the concept of an ‘automative imagination’ is proposed as a means of articulating these different, sometimes competing and sometimes complementary, orientations towards automation.

 

1.2 The Future of Work: Feminist Geographical Engagements

Julie MacLeavy (Geographical Sciences, University of Bristol)

This paper considers the particular pertinence of feminist geographical scholarship to debates on the ‘future of work’. Drawing inspiration from Linda McDowell’s arguments that economic theories of epochal change rest on the problematic premise that economic and labour market changes are gender-neutral, it highlights the questions emerging from feminist economic geography research and commentary on the reorganisation of work, workers’ lives and labour markets. From this, the paper explores how feminist and anti-racist politics connect with the imagination of a ‘post-work’ world in which technological advancement is used to enable more equitable ways of working (rather than negative effects such as the intensification of work). Political responses to the critical challenges that confront workers in the present moment of transformation are then examined, including calls for a Universal Basic Income, which has the potential to reshape the landscape of labour-capital relations.

 

1.3 Narrating the relationship between automation and the changing geography of digital work

Daniel Cockayne, Geography and Environmental Management, University of Waterloo

Popular narratives about the relationship between automation and work often make a straightforward causal link between technological change and deskilling, job loss, or increased demand for jobs. Technological change – today, most commonly, automation and AI – is often scripted as threatening the integrity of labor, unionization, and traditional working practices, or as creating more demand for jobs, in which the assumption is the more jobs the better. These narratives elide a close examination of the politics of work, including considerations of domestic and international racialized and gendered divisions of labor. Whether positive or negative, the supposed inevitability of technological transition positions labor as a passive victim of these changes, while diverting attention away from the workings of international financialized capital. Yet when juxtaposed against empirical data, straightforward cause-and-effect narratives become more complex. The unemployment rate in North America is at its lowest in 40 years (4.1% in the USA and 5.7% in Canada), which troubles the relationship between automation and job loss. Yet, though often touted by publications like The Economist as a marker of national economic well-being, unemployment rates ignore the kinds of work people are doing, effacing the qualitative changes in work practices over time. I examine these tropes and their relationship to qualitative changes in work practices, to argue that the link between technological change and the increasing precaritization of work is more primary than the diversionary relationship between technological change and job loss and gain or deskilling.

 

1.4 Sensing automation

David Bissell, University of Melbourne

Processes of industrial automation are intensifying in many sectors of the economy through the development of AI and robotics. Conventional accounts of industrial automation stress the economic imperatives to increase profitability and safety. Yet such coherent snapped-to-grid understandings risk short-circuiting the complexity and richness of the very processes and events that compose automation. This paper draws from and reflects through a series of encounters with workers engaged in the increasingly automated mining sector in Australia. Rather than thinking these encounters solely through their representational dimensions, with the aim of building a coherent image of what automation is, this paper is an attempt at writing how automation becomes differently disclosed through the aesthetic dimensions of encounters. It acknowledges how automation is always caught up in multiple affective and symbolic ecologies which create new depths of association. Developing post-phenomenological thought in cultural geography, this paper articulates some of the political and ethical stakes of admitting ambiguity, incoherence and confusion as qualities of our relations with technological change.

 

1.5 Technological Sovereignty, Post-Human Subjectivity, and the Production of the Digital-Urban Commons

Casey Lynch (School of Geography and Development, University of Arizona)

As cities become increasingly monitored, planned, and controlled by the proliferation of digital technologies, urban geographers have sought to understand the role of software, big data, and connected infrastructures in producing urban space (French and Thrift 2002; Dodge, Kitchin and Zook 2009). Reflections on the “automatic production of space” have raised questions about the role and limitations of “human” agency in urban space (Rose 2017) and the possibilities for urban democracy. Yet this literature largely considers the proliferation of digital infrastructures within the dominant capitalist, smart-city model, with few discussions of the possibilities for more radically democratic techno-urban projects. Engaging these debates, this paper considers alternative models of the techno-social production of urban space based around the collective production and management of a common digital-urban infrastructure. The paper reflects on the notion of “technological sovereignty” and the case of Guifi.net, the world’s largest “community wireless network”, covering much of Catalonia. The paper highlights the way its decentralized, DIY mode of producing and maintaining digital urban infrastructure points to the possibilities for more radically democratic models of co-production in which urban space, technological infrastructures, and subjectivities are continually reshaped in relation. Through this, the paper seeks to contribute to broader discussions about the digitalization of urban space and the possibilities for a radical techno-politics.

New Geographies of Automation? (2): Spacings

2.1 The urbanisation of robotics and automated systems – a research agenda

Andy Lockhart* (a.m.lockhart@sheffield.ac.uk), Aidan While* (a.h.while@sheffield.ac.uk), Simon Marvin (s.marvin@sheffield.ac.uk), Mateja Kovacic (m.kovacic@sheffield.ac.uk), Desiree Fields (d.fields@sheffield.ac.uk) and Rachel Macrorie (r.m.macrorie@sheffield.ac.uk) (Urban Institute, University of Sheffield)
*Attending authors
Pronouncements of a ‘fourth industrial revolution’ or ‘second machine age’ have stimulated significant public and academic interest in the implications of accelerating automation. The potential consequences for work and employment have dominated many debates, yet advances in robotics and automated systems (RAS) will have profound and geographically uneven ramifications far beyond the realm of labour. We argue that the urban is already being configured as a key site of application and experimentation with RAS technologies. This is unfolding across a range of domains, from the development of autonomous vehicles and robotic delivery systems, to the growing use of drone surveillance and predictive policing, to the rollout of novel assistive healthcare technologies and infrastructures. These processes and the logics underpinning them will significantly shape urban restructuring and new geographies of automation in the coming years. However, while there is growing research interest in particular domains, there remains little work to date which takes a more systemic view. In this paper we do three things, which look to address this gap and constitute the contours of a new urban research agenda. First, we sketch a synoptic view of the urbanisation of RAS, identifying what is new, what is being enabled as a result and what should concern critical scholars, policymakers and the wider public in debates about automation. Second, we map out the multiple and sometimes conflicting rationalities at play in the urbanisation of RAS, which have the potential to generate radically different urban futures, and may address or exacerbate existing socio-spatial inequalities and injustices. Third, and relatedly, we pose a series of questions for urban scholars and geographers, which constitute the basis for an urgent new programme of research and intervention.

 

2.2 Translating the signals: Utopia as a method for interrogating developments in autonomous mobility

Thomas Klinger1, 2
Brendan Doody2
Debbie Hopkins2
Tim Schwanen2
1. Institute of Human Geography, Goethe-University Frankfurt am Main
2. School of Geography and the Environment, University of Oxford

Connected and autonomous vehicles (CAVs) are often presented as technological ‘solutions’ to problems of road safety, congestion, fuel economy and the cost of transporting people, goods and services. In these dominant techno-economic narratives, ‘non-technical’ factors such as public acceptance, legal and regulatory frameworks, cost, and investment in testing, research and supporting infrastructure are the main ‘barriers’ to the otherwise steady roll-out of CAVs. Drawing on an empirical case study of traffic signalling, we trace the implications that advances in vehicle autonomy may have for such mundane and taken-for-granted infrastructure. We employ the three modes of analysis associated with Levitas’ (2013) ‘utopia as a method’. Starting with the architectural mode, we identify the components, actors and visions underpinning ‘autonomobility’. The archaeological mode is then used to unpack the assumptions, contradictions and possible unintended effects that CAVs may have for societies. In the ontological mode, we speculate upon the types of human and non-human subjectivities and agencies implied by alleged futures of autonomous mobility. Through this process we demonstrate that techno-economic accounts overemphasise the likely scale, benefits and impacts these advances may have for societies. In particular, they overlook how existing automobile-dependent mobility systems are the outcome of complex assemblages of social and technical elements (e.g., cars, car-drivers, roads, petroleum supplies, novel technologies and symbolic meanings) which have become interlinked in systemic and path-dependent ways over time. We conclude that utopia as method may provide one approach by which geographers can interrogate and open up alarmist/boosterish visions of autonomobility and automation.

 

2.3 Automating the laboratory? Folding securities of malware

Andrew Dwyer, University of Oxford
andrew.dwyer@cybersecurity.ox.ac.uk

Folding, weaving, and stitching are crucial to contemporary analyses of malicious software, generated and maintained through the spaces of the malware analysis laboratory. Technologies entangle (past) human analysis, action, and decision into the ‘static’ and ‘contextual’ detections that we depend on today. A large growth in suspect software on which to draw decisions about maliciousness has driven a movement into (seemingly omnipresent) machine learning. Yet this is not the first intermingling of human and technology in malware analysis. It draws on a history of automation, enabling interactions to ‘read’ code in stasis; build knowledges in more-than-human collectives; allow ‘play’ through a monitoring of behaviours in ‘sandboxed’ environments; and draw on big data to develop senses of heuristic reputation scoring.

Though we can draw on past automation to explore how security is folded, made known, rendered as something knowable, contemporary machine learning performs something different. Drawing on Louise Amoore’s recent work on the ethics of the algorithm, this paper queries how points of decision are now more-than-human. Automation has always extended the human, led to loops, and driven alternative ways of living. Yet the contours, the multiple dimensions of the neural net, produce the malware ‘unknown’ that has become the narrative of the endpoint industry. This paper offers a history of the automation of malware analysis, from static and contextual detection, to ask how automation is changing how cyberspace becomes secured and made governable; and how automation is not something to be feared, but tempered with the opportunities and challenges of our current epoch.

 

2.4 Robots and resistance: more-than-human geographies of automation on UK dairy farms

Chris Bear (Cardiff University; bearck@cardiff.ac.uk)
Lewis Holloway (University of Hull; l.holloway@hull.ac.uk)

This paper examines the automation of milking on UK dairy farms to explore how resistance develops in emerging human-animal-technology relations. Agricultural mechanisation has long been celebrated for its potential to increase the efficiency of production. Automation is often characterised as continuing this trajectory; proponents point to the potential for greater accuracy, the removal of less appealing work, the reduction of risks posed by unreliable labour, and the removal of labour costs. However, agricultural mechanisation has never been received wholly uncritically; studies refer to practices of resistance that have developed due to fears around (for instance) impacts on rural employment, landscapes, ecologies and traditional knowledge practices. Drawing on interviews with farmers, observational work on farms and analysis of promotional material, this paper examines resistant relations that emerge around the introduction of Automated Milking Systems (AMS) on UK dairy farms. While much previous work on resistance to agricultural technologies has pitted humans against machines, we follow Foucault in arguing that resistance can be heterogeneous and directionally ambiguous, emerging through ‘the capillary processes of counter-conduct’ (Holloway and Morris 2012). These capillary processes can have complex geographies and emerge through more-than-human relations. Where similar conceptualisations have been developed previously, technologies continue to appear rather inert – they are often the tools by which humans attempt to exert influence, rather than things which can themselves ‘object’ (Latour 2000), or which are co-produced by other nonhumans rather than simply imposed or applied by humans. We begin, therefore, to develop a more holistic approach to the geographies of more-than-human resistance in the context of automation.

 

2.5 Fly-by-Wire: The Ironies of Automation and the Space-Times of Decision-Making

Sam Hind (University of Siegen; hind@locatingmedia.uni-siegen.de)

This paper presents a ‘prehistory’ (Hu 2015) of automobile automation, by focusing on ‘fly-by-wire’ control systems in aircraft. Fly-by-wire systems, commonly referred to as ‘autopilots’, work by translating human control gestures into component movements, via digital soft/hardware. These differ historically from mechanical systems, in which pilots have direct steering control through a ‘yoke’ connected via metal rods or wires to the physical components of an aircraft (ailerons etc.). Since the launch of the first commercial aircraft with fly-by-wire in 1988, questions regarding the ‘ironies’ or ‘paradoxes’ of automation (Bainbridge 1983) have continued to be posed. I look at the occurrence of ‘mode confusion’ in cockpits to tease out one of these paradoxes, using automation in the aviation industry as a heuristic lens to analyze automation of the automobile. I then proceed by detailing a scoping study undertaken at the Geneva Motor Show in March this year, in which Nissan showcased an autonomous vehicle system. Unlike other manufacturers, Nissan is pitching the need for remote human support when vehicles encounter unexpected situations; further complicating and re-distributing navigational labour in, and throughout, the driving-machine. I will argue that whilst such developments plan to radically alter the ‘space-times of decision-making’ (McCormack and Schwanen 2011) in the future autonomous vehicle, they also exhibit clear ironies or paradoxes found similarly, and still fiercely discussed, in the aviation industry with regard to fly-by-wire systems. It is wise, therefore, to consider how these debates have played out – and with what consequences.

Machine Learning ‘like alchemy’, not electricity

Holly from the UK TV programme Red Dwarf

In this Neural Information Processing Systems (NIPS) conference talk (2017), given on receiving the ‘test of time’ award, Ali Rahimi discusses the status of rigour in the field of machine learning. In response to Andrew Ng’s infamous claim that “Artificial Intelligence is like electricity”, Rahimi retorts that “machine learning is like alchemy”. I’ve embedded the talk below; it kicks in at the point where Rahimi starts this part of the argument. I confess I don’t understand the maths talk it is embedded within, but I think this embodies the best of ‘science’/academia – cut the bullshit, talk about what we don’t know as much as what we do.

Great opportunity > Internship with the Social Media Collective (Microsoft)

Twitter

Via Nancy Baym:

Call for applications! 2018 summer internship, MSR Social Media Collective

APPLICATION DEADLINE: JANUARY 19, 2018

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) brings together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.

MSRNE internships are 12-week paid stays in our lab in Cambridge, Massachusetts. During their stay, SMC interns are expected to devise and execute their own research project, distinct from the focus of their dissertation (see the project requirements below). The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market.

The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of the SMC. We are looking for applicants working in one or more of the following areas:

  1. Personal relationships and digital media
  2. Audiences and the shifting landscapes of producer/consumer relations
  3. Affective, immaterial, and other frameworks for understanding digital labor
  4. How platforms, through their design and policies, shape public discourse
  5. The politics of algorithms, metrics, and big data for a computational culture
  6. The interactional dynamics, cultural understanding, or public impact of AI chatbots or intelligent agents

Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.

Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, PhD interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through their algorithmic design and user policies, taking up the role of custodians of public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How do public institutions like schools and libraries prepare workers for the information economy, and how are they changed in the process? (Dan Greene)
  • How are media standards made, and what do their histories tell us about the kinds of things we can represent? (Dylan Mulvin)

SMC PhD interns may also have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of artificial intelligence and data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

For more information about the Social Media Collective, and a list of past interns, visit the About page of our blog. For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

Read more.

‘Pax Technica’ Talking Politics, Naughton & Howard

Nest - artwork by Jakub Geltner

This episode of the ‘Talking Politics’ podcast is a conversation between LRB journalist John Naughton and the Oxford Internet Institute’s Professor Phillip Howard, ranging over a number of topics but largely circling around the political issues that may emerge from ‘Internets of Things’ (the plural is important in the argument), which are discussed in Howard’s book ‘Pax Technica’. Worth a listen if you have time…

One of the slightly throwaway bits of the conversation that interested me, which didn’t concern the tech, was when Howard comments on the kind of book Pax Technica is – a ‘popular’ rather than ‘scholarly’ book – and how that had led to a sense of dismissal by some. It seems nuts (to me, anyway), when we’re all supposed to be engaging in ‘impact’, ‘knowledge exchange’ and so on, that opting to write a £17 paperback that opens out debate, instead of an £80+ ‘scholarly’ hardback, is frowned upon. I mean, I understand some of the reasons why, but still…

Reblog> (video): Gillian Rose – Tweeting the Smart City

Smart City visualisation

Via The Programmable City.

Seminar 2 (video): Gillian Rose – Tweeting the Smart City

We are delighted to share the video of our second seminar in our 2017/18 series, entitled Tweeting the Smart City: The Affective Enactments of the Smart City on Social Media, given by Professor Gillian Rose from Oxford University on the 26th October 2017 and co-hosted with the Geography Department at Maynooth University.

Abstract
Digital technologies of various kinds are now the means through which many cities are made visible and their spatialities negotiated. From casual snaps shared on Instagram to elaborate photo-realistic visualisations, digital technologies for making, distributing and viewing cities are more and more pervasive. This talk will explore some of the implications of that digital mediation of urban spaces. What forms of urban life are being made visible in these digitally mediated cities, and how? Through what configurations of temporality, spatiality and embodiment? And how should that picturing be theorised? Drawing on recent work on the visualisation of so-called ‘smart cities’ on social media, the lecture will suggest the scale and pervasiveness of digital imagery now means that notions of ‘representation’ have to be rethought. Cities and their inhabitants are increasingly mediated through a febrile cloud of streaming image files; as well as representing cities, this cloud also operationalises particular, affective ways of being urban. The lecture will explore some of the implications of this shift for both theory and method as well as critique.

AI Now report

My Cayla Doll

The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

CFP: Workshop on Trustworthy Algorithmic Decision-Making

Not sure where I found this, but it may be of interest…

Workshop on Trustworthy Algorithmic Decision-Making
Call for Whitepapers

We seek participants for a National Science Foundation sponsored workshop on December 4-5, 2017 to work together to better understand algorithms that are currently being used to make decisions for and about people, and how those algorithms and decisions can be made more trustworthy. We invite interested scholars to submit whitepapers of no more than 2 pages (excluding references); attendees will be invited based on whitepaper submissions. Meals and travel expenses will be provided.

Online algorithms, often based on data-driven machine-learning approaches, are increasingly being used to make decisions for and about people in society. One very prominent example is the Facebook News Feed algorithm that ranks posts and stories for each person, and effectively prioritizes what news and information that person sees. Police are using “predictive policing” algorithms to choose where to patrol, and courts are using algorithms that predict the likelihood of repeat offending in sentencing. Face recognition algorithms are being implemented in airports in lieu of ID checks. Both Uber and Amazon use algorithms to set and adjust prices. Waymo/Google’s self-driving cars are using Google maps not just as a suggestion, but to actually make route choices.

As these algorithms become more integrated into people’s lives, they have the potential to have increasingly large impacts. However, if these algorithms cannot be trusted to perform fairly and without undue influences, then there may be some very bad unintentional effects. For example, some computer vision algorithms have mis-labeled African Americans as “gorillas”, and some likelihood of repeat offending algorithms have been shown to be racially biased. Many organizations employ “search engine optimization” techniques to alter the outcomes of search algorithms, and “social media optimization” to improve the ranking of their content on social media.

Researching and improving the trustworthiness of algorithmic decision-making will require a diverse set of skills and approaches. We look to involve participants from multiple sectors (academia, industry, government, popular scholarship) and from multiple intellectual and methodological approaches (computational, quantitative, qualitative, legal, social, critical, ethical, humanistic).

Whitepapers

To help get the conversation started and to get new ideas into the workshop, we solicit whitepapers of no more than two pages in length that describe an important aspect of trustworthy algorithmic decision-making. These whitepapers can motivate specific questions that need more research; they can describe an approach to part of the problem that is particularly interesting or likely to help make progress; or they can describe a case study of a specific instance in the world of algorithmic decision-making and the issues or challenges that case brings up.

Some questions that these whitepapers can address include (but are not limited to):

  • What does it mean for an algorithm to be trustworthy?
  • What outcomes, goals, or metrics should be applied to algorithms and algorithm-made decisions (beyond classic machine-learning accuracy metrics)?
  • What does it mean for an algorithm to be fair? Are there multiple perspectives on this?
  • What threat models are appropriate for studying algorithms? For algorithm-made decisions?
  • What are ways we can study data-driven algorithms when researchers don’t always have access to the algorithms or to the data, and when the data is constantly changing?
  • Should algorithms that make recommendations be held to different standards than algorithms that make decisions? Should filtering algorithms have different standards than ranking or prioritization algorithms?
  • When systems use algorithms to make decisions, are there ways to institute checks and balances on those decisions? Should we automate those?
  • Does transparency really achieve trustworthiness? What are alternative approaches to trusting algorithms and algorithm-made decisions?

Please submit white papers along with a CV or current webpage by October 9, 2017 via email to trustworthy-algorithms@bitlab.cas.msu.edu. We plan to post whitepapers publicly on the workshop website (with authors’ permission) to facilitate conversation ahead of, at, and after the workshop. More information about the workshop can be found at http://trustworthy-algorithms.org.

We have limited funding for PhD students interested in these topics to attend the workshop. Interested students should also submit a whitepaper with a brief description of their research interests and thoughts on these topics, and indicate in their email that they are PhD students.

CFP: Theorising digital space

glitched image of a 1990s NASA VR experience

In another of a series of moments that feel dangerously like a return to the 1990s, as some geographers attempt to wrangle ‘digital geographies’ into a brand (which I find problematic), I saw the below CFP for the AAG.

I am sorry if it seems like I am picking on this one CFP. I have no doubt that it was written with the best of intentions, and if I were able to attend the conference I would apply to speak and would attend the session; I hope others will too. For the purposes of this post, it is simply the latest in a line of conference sessions that seem, unfortunately, to miss, or even elide, long-standing debates in geography about mediation.

Maybe my reaction is in part because I cannot attend (I am only human, and I would quite like to go to New Orleans!), but it is also because I am honestly shocked at the inability of debates, within what is after all a fairly small discipline, to move forward in thinking about ‘space’ and mediation. This stands out because it follows ‘digital’ sessions at the AAG last year that made similar omissions.

In the late 1990s a whole host of people theorised place/space in relation to what we’re now calling ‘the digital’. Quite a few were geographers. There exists a significant and, sometimes, sophisticated literature that lays out these debates, ranging from landmark journal articles to edited books and monographs that all offer different views on how to understand mediation spatially (some of this work features in a bibliography I made ages ago).

Ironically, perhaps, all of this is largely accessible ‘online’: you only need to search for relevant key terms and follow citation chains through repositories – much of it is there, and many of the authors are themselves accessible ‘digitally’. And yet, periodically, we see what is in effect the same call for papers asking similar questions: is there a ‘physical’/‘digital’ binary [no], what might it do, how do we research the ‘digital’, the ‘virtual’, and so on.

We, all kinds of geographers, are not only now beginning to look at digital geographies: such work has been going on for some time, and it would be great if that were acknowledged in the way that Prof. Dorothea Kleine did with rare clarity in her introduction to the RGS Digital Geographies Working Group symposium earlier this year (skip to 03:12 in this video).

So I really hope that some of the authors of books like “Virtual Geographies“, to take just one example (there are plenty more – I am not seeking to be canonical!), might consider re-engaging with these discussions, to lend some of the perspective they have accrued over the last 20+ years, and to speak at, or at least attend, sessions like this one.

I hope that others will consider speaking in this session, to engage productively and to open out debate, rather than attempt to limit it within a kind of clique-y brand.

Theorizing Place and Space in Digital Geography: The Human Geography of the Digital Realm

In 1994 Doreen Massey published Space, Place and Gender, bringing together in a single volume her thoughts on many of the key discussions in geography in the 1980s and early 1990s. Of particular note was the chapter ‘A global sense of place’ and its discussion of what constitutes a place. Massey argues that places, just like people, have multiple identities, and that multiple identities can be placed on the same space, creating multiple places inside space. Places can be created by different people and communities, and it is through social practice, particularly social interaction, that place is made. Throughout the book Massey also argues that places are processual – they are not frozen moments – and that they are not clearly defined by borders. As more and more human exchanges in the ‘physical realm’ move to, or at least involve in some way, the ‘digital realm’, how should we understand the sites of the social that happen to be in the digital? What does a human-geographical, place-orientated understanding of the digital sites of social interaction tell us about geography, both in the digital and the physical world?

Massey also notes that ‘communities can exist without being in the same place – from networks of friends with like interests, to major religious, ethnic or political communities’. Ever-evolving mobile technologies, the widening infrastructures that support them and increasing access to smartphones – thanks in part to new smartphone makers in China releasing affordable yet powerful handsets around the world – have made access to the digital realm, both fixed in place (through computers) and, increasingly often, through mobile technologies, a possibility for a growing number of people worldwide. How do impoverished or excluded groups use smart technologies to (re)produce place, or a sense of place, in ways that include links to the digital realm? From rural farming communities to refugees fleeing Syria and many other groups, in what ways does the digital realm afford spatial and place-making opportunities to those lacking place or spatial security?

How are we to understand the digital geographies of platforms and the spaces they give us access to? Do platforms themselves even have geographies? Recently geographers such as Mark Graham have begun mapping the dark net, but how should we understand the geographies of other digital spaces, from instant messaging platforms to social media or video streaming websites? What is visible and what is obscured? And what can we learn about traditional topics in social science, such as power and inequality, when we begin to look at digital geographies?

In this session of five papers we are looking for contributions exploring:

  • Theories of place and space in the digital realm, including those that explore the relationship between the digital and physical realms.
  • Research on the role of the digital realm in (re)producing physical places, spaces and communities, or in creating new places, spaces and communities, both within the digital realm and outside of it.
  • Papers considering the relationship between the physical and digital realms and accounts of co-production within them.
  • The role of digital technologies in providing a sense of space and place, spatial security, and secure spaces and places to those who lack them.
  • Research exploring the geographies of digital platforms, websites, games or applications, particularly qualitative accounts that examine their physical and digital geographies.
  • Research examining issues of power, inequality, visibility and distance within the digital realm.

“Invisible Images: Ethics of Autonomous Vision Systems” Trevor Paglen at “AI Now” (video)

racist facial recognition

Via Data & Society / AI Now.

Trevor Paglen on ‘autonomous hypernormal mega-meta-realism’ (probably a nod to Curtis there). A brief, entertaining talk about ‘AI’ visual recognition systems and their aesthetics.

(I don’t normally hold with laughing at your own gags, but Paglen says some interesting things here – expanded upon in this piece (‘Invisible Images: Your pictures are looking at you’) and in the artwork Sight Machines [see below].)