Published> A very public cull – the anatomy of an online issue public


I am pleased to share that an article I co-authored with Rebecca Sandover (1st author) and Steve Hinchliffe has finally been published in Geoforum. I would like to congratulate my co-author Rebecca Sandover for this achievement – the article went through a lengthy review process but is now available as an open access article. You can read the whole article, for free, on the Geoforum website. To get a sense of the argument, here is the abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore, 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple or to be expected and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.

Our vacillating accounts of the agency of automated things

Rachael in the film Blade Runner

“There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.”

Alexis Madrigal

I wonder whether we have, if not an increasing, then certainly a more visible, problem with addressing the agency of automated processes – in particular, automation that functions predominantly through software, i.e. stuff we refer to as ‘algorithms’ and ‘algorithmic’, possibly ‘intelligent’ or ‘smart’, and perhaps even ‘AI’, ‘machine learning’ and so on. I read three things this morning that seemed to come together to concretise this thought: Alexis Madrigal’s article in The Atlantic – “Google and Facebook have failed us”, James Somers’ article in The Atlantic – “The coming software apocalypse” and L. M. Sacasas’ blogpost “Machines for the evasion of moral responsibility”.

In Madrigal’s article we can see how the apparent autonomy of the ‘algorithm’ becomes the fulcrum of the machinations around ‘fake news’, in this case regarding the 2nd October 2017 mass shooting in Las Vegas. On the one hand, the apparent incapacity of an automated software system to perform the kinds of reasoning attributable to a ‘human’ editor is diagnosed; on the other, the speed at which such breaking news events take place and the volume of data being processed by ‘the algorithm’ led to Google admitting that their software was “briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries”. Madrigal asserts:

It’s no longer good enough to shrug off (“briefly,” “for a small number of queries”) the problems in the system simply because it has computers in the decision loop.

In Somers’ article we can see how decisions made by programmers writing software that processed call sorting and volume for the emergency services in Washington State led to the 911 phone system being inaccessible to callers for six hours one night in 2014. As Somers describes:

The 911 outage… was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
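To make the failure mode concrete, here is a minimal Python sketch of the kind of logic Somers describes – a counter used to mint call identifiers, capped by a hard-coded ceiling with no alarm attached to it. It is entirely hypothetical: the names and numbers are mine, not Intrado’s.

```python
# Hypothetical sketch of the failure mode Somers describes -- not
# Intrado's actual code. A counter mints call IDs up to a hard-coded
# ceiling, and nothing alerts anyone when that ceiling is reached.

MAX_CALL_ID = 40_000_000   # a stand-in for "a number in the millions"

call_counter = 0

def route_call(call):
    """Assign a unique ID to an incoming call and hand it on to dispatch."""
    global call_counter
    if call_counter >= MAX_CALL_ID:
        # The programmers never anticipated hitting the threshold,
        # so nothing raises an alert: the call is silently rejected.
        return None
    call_counter += 1
    call["id"] = call_counter
    return dispatch(call)

def dispatch(call):
    # Stand-in for routing the call to a 911 dispatch centre.
    return call["id"]
```

On this reading, the overnight ‘fix’ of changing a single number is an edit to something like MAX_CALL_ID; the deeper problem is the system wrapped around it.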

Quoting Nancy Leveson, an MIT professor of aeronautics (of course), Somers observes: “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Michael Sacasas, in his blogpost, refers to Madrigal’s article and draws out the argument that the complex processes of software development and maintenance, within large and complicated organisations such as Facebook, leave those working there open to working in a ‘thoughtless’ manner:

“following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely calculate or problem solve) and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of ethical and moral failures. …It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility.”

The simplest version of what I’m getting at is this: on the one hand we attribute significant agency to automated software processes – this usually involves talking about ‘algorithms’ as quasi- or pretty much autonomous – which tends to imply that whatever it is we’re talking about, e.g. “Facebook’s algorithm”, is ‘other’ to us, ‘other’ to what might conventionally be characterised as ‘human’. On the other hand we talk about how automated processes can encode the assumptions and prejudices of the creators of those techniques and technologies, such as the ‘racist soap dispenser‘.

There are a few things we can perhaps note about these related but potentially contradictory narratives.

First, they perhaps imply that the moment of authoring, creating, making, manufacturing is a one-off event – the things are made, the software is written and it becomes set, a bit like baking a sponge cake – you can’t take the flour, sugar, butter and eggs out again. Or, in a more nuanced version of this point, there is a sense that once set in train these things are really, really hard to change, which may, of course, be true in particular cases but may not be a general rule. A soap dispenser’s sensor may be ‘hard coded’ to particular tolerances, whereas what gets called ‘Facebook’s algorithm’, while complicated, is probably readily editable (albeit with testing, version control and so on). This kind of narrative freights a form of determinism – there is an implied direction of travel to the technology.
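To see the contrast in code: below is a hypothetical sketch, with names and numbers invented purely for illustration, of the difference between a tolerance ‘baked in’ to a device’s firmware and a weighting held in readily editable configuration.

```python
# Hypothetical illustration -- neither snippet is taken from any real
# device or platform.

# Closer to the baked sponge cake: a tolerance fixed in firmware.
# Changing it means re-flashing or replacing the hardware.
HARDCODED_REFLECTANCE_THRESHOLD = 0.42

def soap_dispenser_fires(sensor_reading):
    """Dispense only if the sensed reflectance clears the fixed threshold."""
    return sensor_reading > HARDCODED_REFLECTANCE_THRESHOLD

# Readily editable: a ranking weight held in live configuration.
# Changing it is an edit, a test run and a deployment -- effortful,
# but routine rather than set in stone.
feed_config = {"engagement_weight": 0.7, "recency_weight": 0.3}

def feed_score(item):
    """Score an item for ranking using the current configuration."""
    return (feed_config["engagement_weight"] * item["engagement"]
            + feed_config["recency_weight"] * item["recency"])
```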

Second, the kinds of automated processes I’m referring to here, ‘algorithms’ and so on, get ‘black boxed’. This is done not only by those who create, operate and benefit from those processes – the frequently invoked Google, Facebook, Amazon and so on – but also, in part, by those who seek to highlight the black boxing. As Sacasas articulates: “The black box metaphor tries to get at the opacity of algorithmic processes”. He offers a quote from a series of posts by Kevin Hamilton which illustrates something of this:

Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.

In a similar manner to the imagined participant in Searle’s “Chinese Room” thought experiment, the Facebook user can only guess at the efficacy of their relation to the black boxed process. ‘Tweaking our settings’ and responses might, as Hamilton suggests, “become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” A further step here is that even those of us diagnosing and analysing the ‘black boxes’ are perhaps complicit in keeping them in some way obscure. As Evan Selinger and Woodrow Hartzog argue, things that are obscure can be seen as ‘safe’, which is the principle of cryptography. Obscurity, for Selinger & Hartzog, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power”. Nevertheless, obscurity can also be an excuse – the black box is impenetrable, not open to analysis, and so we settle on other analytic strategies or simply focus on other things. A well-worn strategy seems to be to retreat to the ontological, to which I’ll return shortly.

Third, following from the above, perhaps the ways in which we identify ‘black boxes’, or the forms of black boxing we do ourselves, over-simplify or elide complexity. This is a difficult balancing act. A good concept becomes a short-hand that freights meaning in useful ways. However, there is always the potential that it can hide as much as it reveals. In the case of the phenomena outlined in the two articles above, we perhaps focus on the ends, what we think ‘the algorithm’ does – the kinds of ‘effects’ we see, such as ‘fake news’ and the breakdown of an emergency telephone system, or even a ‘racist soap dispenser’. It is then very tempting to perform what Sally Wyatt calls a ‘justificatory’ technological determinism – not only is there a ‘cause and effect’ but these things were bound to happen because of the kinds of technological processes involved. By fixing ‘algorithms’ as one kind of thing, we perhaps elide the ways in which they can be otherwise and, perhaps more seriously, elide the parts of the process of the development, resourcing, use and reception of those technologies and their integration into wider sociotechnical systems and society. These things don’t miraculously appear from nowhere – they are the result of lots of actions and decisions, some banal, some ‘strategic’, some with good intentions and some perhaps morally questionable. By black boxing ‘the algorithm’, attributing ‘it’ with agency and making it ‘other’ to human activities, we ignore or obscure the organisational processes that make it possible at all. I argue we cannot see these things as completely one thing or the other – the black boxed entity or the messy sociotechnical system – but rather as both, and we need to accommodate that sort of duality in our approaches to explanation.

Fourth, normative judgements are attached to the apparent agency of an automated system when it is perceived as core to the purpose of the business. Just like any other complicated organisation whose business becomes seen as a ‘public good’ (energy companies might be another example), competing, perhaps contradictory, narratives take hold. The purpose of the business may be to make money – in the case of Google and Facebook this is of course primarily through advertising, requiring attractive content to which to attach adverts – but the users perhaps consider their experience, which is ‘free’, more important. It seems to have become received wisdom that the very activities that drive the profits of the company – boosting content that drives traffic, which serves more advertising and, I assume, results in more revenue – run counter to accepted social and moral norms. This exemplifies the competing understandings of what companies like Google and Facebook do – in other words, what their ‘algorithms’ are for. This has a bearing on the kinds of stories we then tell about the perceived, or experienced, agency of the automated system.

Finally (for now), there is a tendency for academic social scientific studies of automated software systems to resort to ontological registers of analysis. There may be all sorts of reasons used as justification for this, such as the specific detail of a given system not being accessible, or (quite often) only being accessible through journalists, or the funding not being available to do the research. However, it also pays dividends to do ‘hard’ theory. In the part of academia I knock about in, geography-land and its neighbours, technology has been packaged up into the ‘non-human’, whereby the implication is that particular kinds of technology are entirely separate from us, humans, and can be seen to have ‘effects’ upon us and our societies. This is trendy cos one can draw upon philosophy that has long words and hard ideas in it, in particular ‘object oriented ontology’ (and, to a much lesser extent, the ‘bromethean’ accelerationists). The generalisable nature of ‘big’ theory is beguiling: it seems to permit us to make general, perhaps global, claims and often results in a healthy return in the academic currency of citations. Now, I too am guilty of resorting to theory, which is more or less abstract, through the work of Bernard Stiegler in particular, but I’d like to think I haven’t disappeared down the almost theological rabbit hole of trying to think objects in themselves through abstract language such as ‘units’ or ‘allopoetic objects’ and ‘perturbations’ of non-human ‘atmospheres’.

It seems to me that while geographers and others have been rightly critical of simplistic binaries of human/technical, there remains a common habit of referring to a technical system that has been written by, and is maintained by, ‘humans’ as other to whatever that ‘human’ apparently is, and of referring to technologically mediated activities as somehow extra-spatial, as virtual, in contra-distinction to a ‘real’. This is plainly a contradiction. On the one hand this positions the technology in question (‘algorithms’ and so on) as totally distinct from us, imbued with an ability to act without us and so potentially powerful. On the other hand, if that technology is ‘virtual’ and not ‘real’ it implies it doesn’t count in some way. While in the late 90s and early 00s the ‘virtual’ technologies we discussed were often seen as somewhat inconsequential, the more contemporary concerns about ‘fake news’, malware and encoded prejudices (such as racism) have made automated software systems part of the news cycle. I don’t think it is a coincidence that we’ve moved from metaphors of liberty and community online to metaphors of ‘killer robots’, like the Terminator (though of course there is a real prospect of autonomous weapons systems, as discussed elsewhere).

In the theoretical zeal of ‘decentering the human subject’ and focusing on the apparent alterity of technology, as abstract ‘objects’, we are at risk of failing to address the very concerns which are expressed in the articles by Madrigal and Somers. In a post entitled ‘Resisting the habits of the algorithmic mind‘, Sacasas suggests that automated software systems (‘algorithms’) are something like an outsourcing of the kinds of problem solving ‘that ordinarily require cognitive labor–thought, decision making, judgement. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind.’ The prosthetic capacity of technologies like software to in some way automate some of these processes might be liberating, but it is also, as Sacasas suggests, morally and politically consequential. To ‘outsource the life of the mind’, for Sacasas, means to risk being ‘habituated into conceiving of the life of the mind on the model of the problem-solving algorithm’. A corollary to this supposition, I would argue, is that there is a risk in the very diagnosis of this problem that we habituate ourselves to a determinism as well. As argued in the third point, above, we risk obscuring the organisational processes that make such sociotechnical systems possible at all. In the repetition of arguments that autonomous, ‘non-human’ ‘algorithms’ are already apparently doing all of these problematic things, we will these circumstances upon ourselves. There is, therefore, an ethics to thinking about and analysing automation too.

Where does this leave us? I think it leaves us with some critical tools and tasks. We perhaps need not shy away from the complexity of the systems we discuss – the ideas and words we use can do work for us (‘algorithm’, for example, freights some meaning), but we perhaps need to be careful we don’t obscure as much as we reveal. We perhaps need to use more, not fewer, metaphors. We definitely need more studies that get at the specificity of particular forms, processes and work of automation/automated systems. All of us, journalists and academics alike, perhaps need to use our words more carefully, or use more words to get at the issues.

Simply hailing the ‘rise of the robots’ is not enough. I think this reproduces an imagination of automation that is troubling and ought to be questioned (what I’ve called an ‘automative imaginary’ elsewhere, but maybe that’s too prosaic). For people like me in geography-land to retreat into ‘high’ theory and to only discuss abstract ontological/metaphysical attributes of technology seems to me to be problematic, and is a retreat from that part of the ‘life of the mind’ we claim to advance. I’m not arguing we need to retreat from theory; we simply need to find a balance. A crucial issue for social science researchers of ‘algorithms’ and so on is that this sort of work is probably not the work of a lone-wolf scholar; I increasingly suspect that it needs multi-disciplinary teams. It also needs, at least in part, to produce publicly accessible work (in all senses of ‘accessible’). In this sense, work like the report on ‘Media manipulation and disinformation online‘ by Data & Society seems like a necessary (but by no means the only) sort of contribution. Prefixing your discipline with ‘digital’ and reproducing the same old theory, but applied to ‘digital’ things, won’t, I think, cut it.

Critical and Creative Ethnography After Human Exceptionalism. Brilliant new publication from @annegalloway

Amidst the slog of marking, a shining, jewel-like piece of inspiration appeared in my inbox: one of my academic heroes, Anne Galloway, shared a draft of what is a fantastic chapter for a brilliant book, which is set to be published later this year (what a great editorial team too!). Anne has posted about this on her lab’s blog, so I am reposting some of that post… however, go and read it in full on the More-than-human Lab blog!

I’m pleased to announce that The Routledge Companion to Digital Ethnography, edited by Larissa Hjorth, Heather Horst, myself & Genevieve Bell, will be published later this year.

For the companion I also contributed a chapter called “More-Than-Human Lab: Critical and Creative Ethnography After Human Exceptionalism.”

Anne shares the introductory paragraph, which I think wonderfully performs precisely the ethos of praxis she explores in the chapter:

Haere mai. Welcome. This story starts with an introduction so that the reader can know who I am, and how I have come to know what I know. My name is Anne Galloway. My mother’s family is Canadian and my father’s family is British. Born and raised outside both places, for the past seven years I have been tauiwi, or non-Māori, a settler in Aotearoa-New Zealand. I have always lived between cultures and have had to forge my own sense of belonging. Today I am in my home, on a small rural block in the Akatarawa Valley of the Tararua ranges, at the headwaters of the Waikanae River, on the ancestral lands of Muaūpoko (Ngāi Tara) & Ngāti Toa, with my partner, a cat, seven ducks, five sheep–four of whom I hope are pregnant–and a multitude of extraordinary wildlife. The only way I know how to understand myself is in relation to others, and my academic career has been dedicated to understanding vital relationships between things in the world. Most recently, I founded and lead the More-Than-Human Lab, an experimental research initiative at Victoria University of Wellington. Everything I have done has led me to this point, but for the purposes of this chapter I want to pull on a single thread. This is a love story for an injured world, and it begins with broken bones…

Anne offers more excerpts and explanation in her blogpost and her full reference list (so please do read it!).

I did however want to share a brief snippet of one of the many bits I love from the chapter:

As more technological devices connect people to things in the world, and as more data are collected about people and things, digital ethnography stands to make an important contribution to our understanding of constantly shifting relations. When combined with speculative design that translates realist narratives into fantastic stories, I also believe we can inject hope into spaces, times and relations where it seems most unlikely.

For me, Anne’s reading of a feminist ethics of care – for knowledge, for our ‘selves’ and for our decentred place in the vital soup of our (transindividuated) becoming – as a part of contemporary ethnographic praxis is really valuable, and we would all do well to involve ourselves in the conversation which Anne invites.

The image at the top comes from Anne’s twitter feed; it’s one of her own sheep.

Reblog> Sheep Time? – @annegalloway on the technics of breeding sheep

Good stuff from Anne – watch the video!

I think the reciprocal negotiation of what might be a kind of ‘species time’ – between our different measurements of change (diurnal, seasonal etc.) and the corporeal (generational) changes in the animals’ bodies – is really interesting. It offers an insight into the temporalities of particular kinds of interspecies ‘biopower‘ (or maybe it’s not ‘biopower’ [pace Foucault]..? This is an interesting argument too).

Sheep Time?

I’m always trying to figure out new ways of doing remote presentations so I was pretty excited when I was invited to prepare something that could be continually broadcast (or looped) during the Temporal Design: Surfacing Everyday Tactics of Time Workshop @ Design Informatics, Edinburgh University, 28 September, 2015.

Here’s what I did:

Sheep Time from More-Than-Human Lab on Vimeo.

And here’s some of the Twitter conversation about it:

[Screenshots of the Twitter conversation]

Reblog> CFP An Informational Right to the City? Rethinking the Production, Consumption, and Governance of Digital Geographic Information

Over on the OII blog for the ‘Connectivity, Inclusion and Inequality Group’, Joe Shaw has posted an interesting CFP for the AAG in San Francisco next year. Andy Merrifield will make an awesome discussant!

CfP at AAG 2016: An Informational Right to the City? Rethinking the Production, Consumption, and Governance of Digital Geographic Information

After presenting our paper on the same theme at ICCG 2015 in Palestine, myself and Mark Graham are planning a session for the Association of American Geographers’ annual meeting in San Francisco during Spring 2016. The session aims to provide a space for discussion and research that seeks to ask what a ‘right to the city’ looks like in our increasingly digital world. Interested contributors should read the call for papers below and drop myself or Mark an email to express interest.


An Informational Right to the City? Rethinking the Production, Consumption, and Governance of Digital Geographic Information

Organisers: Joe Shaw and Mark Graham (University of Oxford)
Discussant: Andy Merrifield

Henri Lefebvre (2003:251) once talked of the right to information as a complement to the right to the city. Since then, information communication technologies have become integrally embedded into much of everyday life. The speed of these developments has also obfuscated many changes and processes that now envelop and define the urban experience. This includes changes in systems of abstract spatial representation through geographic information, and the economies surrounding this information as a commodity. The representations which are produced and mediated through this digital information are now contributing to an urban space that is densely digitally layered (Graham, Zook and Boulton, 2013). These digital ‘abstract’ spaces are essential to the production and re-production of our socio-economic world (Lefebvre, 1991). From Wikipedia to Google Maps and TripAdvisor, the code and content that relates to a building is now potentially as important as its bricks and mortar. These processes raise new questions of spatial justice and the urbanization of information: Which spaces are seen, and which are hidden? How is information produced, for whom, and who consumes it? How does information change material places? Who are the powerful actors in these events, and who are powerless? And finally, is the broader concept of an ‘informational right to the city’ now required? If so, how should it be envisioned and put into practice? This session invites submissions concerned with the production, consumption, and governance of urban geographic information, and it encourages research and reflections that seek to rethink what informational rights we have in our hybrid material/digital cities.


Lefebvre, H. (1991). The production of space (D. Nicholson-Smith, Trans.). Oxford: Blackwell.
Lefebvre, H. (2003). Henri Lefebvre: key writings (S. Elden, E. Lebas, & E. Kofman, Eds.). New York: Continuum.
Graham, M., Zook, M., & Boulton, A. (2013). Augmented reality in urban places: contested content and the duplicity of code. Transactions of the Institute of British Geographers, 38(3), 464–479.

Please email abstracts of no more than 300 words to Mark Graham (mark.graham@oii.ox.ac.uk) or Joe Shaw (joe.shaw@oii.ox.ac.uk) by Friday 2nd October 2015. Successful submissions will be contacted by 9th October 2015 and will be expected to pay the registration fee and submit their abstracts online at the AAG website by October 29th 2015.

Some thoughts about how ‘algorithms’ are talked about & what it might mean to study them

A while ago, in a grump, I tweeted something along the lines of “I’m fairly convinced that most social scientists who write about algorithms do not understand what that term means”… Provocative I know, but I was, like I said, in a grump. Rob Kitchin tweeted me back saying that he looked forward to the blogpost – well here it is – finally.

I’m afraid it’s not going to be a well-structured and in-depth analysis of what one may or may not mean by the word algorithm because, well, other people have already done that. What I want to offer here is an expression of the anxiety that lay behind my grumpy tweet, and a means of mitigating it. So, this is a post that uses other people’s work to address:

  • what is an algorithm?
  • how/should we analyse/study them?

To be totally open from the start, my principal sources here are the pieces by Paul Ford, Rob Kitchin, Tarleton Gillespie and Ian Bogost quoted below.

So, what do we mean when we use the word ‘algorithm’?

To begin with a straightforward answer from Paul Ford:

“Algorithm” is a word writers invoke to sound smart about technology. Journalists tend to talk about “Facebook’s algorithm” or a “Google algorithm,” which is usually inaccurate. They mean “software.”

Algorithms don’t require computers any more than geometry does. An algorithm solves a problem, and a great algorithm gets a name. Dijkstra’s algorithm, after the famed computer scientist Edsger Dijkstra, finds the shortest path in a graph. By the way, “graph” here doesn’t mean [a chart] but rather [a network of nodes connected by edges].
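To make Ford’s example concrete, here is a minimal Python rendering of Dijkstra’s algorithm, finding shortest-path distances in exactly that nodes-and-edges sense of ‘graph’. It is my own illustrative sketch, not anyone’s production code.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted graph.

    `graph` maps each node to a list of (neighbour, weight) pairs --
    a graph in the nodes-and-edges sense, not a chart.
    """
    dist = {source: 0}
    queue = [(0, source)]                    # the priority queue controls the order of work
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                         # stale entry; a shorter path was already found
        for neighbour, weight in graph[node]:
            new_d = d + weight               # relax each outgoing edge
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# e.g. dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}, "a")
# returns {"a": 0, "b": 1, "c": 2}
```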

Or, as Rob Kitchin notes in his working paper, an algorithm is (following Kowalski) “logic + control“, such that (citing Miyazaki) the term denotes a:

specific step-by-step method of performing written elementary arithmetic… [and] came to describe any method of systematic or automatic calculation.

This answers a simple definitional question – “what is an algorithm?” – but doesn’t quite answer the question I’ve posed – “what do we mean when we use the word algorithm?” – which Ford gestures towards when suggesting journos (and one might include a lot of academics here) use the word ‘algorithm’ when they mean ‘software’. As Gillespie notes:

there is a sense that the technical communities, the social scientists, and the broader public are using the word in different ways

There isn’t a single/singular meaning of the word then (of course!) and, after decades of post-structuralism, that really shouldn’t be a surprise… nevertheless, there is a kind of discursive politics being performed when we (geographers, social scientists etc. etc.) invoke the term and idea of an ‘algorithm’, and we perhaps need to reflect upon that a little more than we do. I may be wrong, perhaps we just need to let the signifier/signified relation flex and evolve – my main motivation for addressing this question is that I think we already have useful words that address what is being suggested in the use of the word ‘algorithm’ – amongst these words are: code, function (as in software function), policy, programme, protocol (e.g. à la Alexander Galloway), rule and software.

What is salient about the technical definition of an algorithm, for how we might use the word more broadly, is the sense of a (logical) model of inferred relations between things, formally defined in code, that is developed through iteration towards a particular end. As Gillespie notes:

Engineers choose between them based on values such as how quickly they return the result, the load they impose on the system’s available memory, perhaps their computational elegance. The embedded values that make a sociological difference are probably more about the problem being solved, the way it has been modeled, the goal chosen, and the way that goal has been operationalised.

What quickly takes us away from simply thinking about a function defined in code is that the programmes or scripts upon which the algorithms are founded need to be validated in some way – to test their effectiveness – and this involves using test data (another word that has become fashionable). The selection of the data also necessarily has embedded assumptions, values and workarounds which, as Gillespie goes on to suggest, “may also be of much more importance to our sociological concerns than the algorithm learning from it.” The code that represents the algorithm is instantiated either within a software programme – a collection of instructions, operations, functions etc. – that is bundled together as what we used to call an ‘application’, or in a ‘script’ of code that gets pulled into use as and when needed, for example in the context of a website. As Gillespie argues:

these exhaustively trained and finely tuned algorithms are instantiated inside of what we might call an application, which actually performs the functions we’re concerned with. For algorithm designers, the algorithm is the conceptual sequence of steps, which should be expressible in any computer language, or in human or logical language. They are instantiated in code, running on servers somewhere, attended to by other helper applications (Geiger 2014), triggered when a query comes in or an image is scanned. I find it easiest to think about the difference between the “book” in your hand and the “story” within it. These applications embody values as well, outside of their reliance on a particular algorithm.
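Gillespie’s ‘book’/‘story’ distinction can itself be sketched in code. In the made-up example below, the ‘algorithm’ is the conceptual ranking step, expressible in any language; the ‘application’ is everything instantiated around it – thresholds, boosts, filtering – each of which embodies choices of its own. All the names here are illustrative, not anyone’s actual system.

```python
def rank_items(items, score):
    """The 'story': the conceptual sequence of steps -- order by a score."""
    return sorted(items, key=score, reverse=True)

class FeedApplication:
    """The 'book': one instantiation of that algorithm, wrapped in
    operational choices that are not part of 'the algorithm' itself."""

    def __init__(self, boost_recent=1.5, min_score=0.1):
        self.boost_recent = boost_recent   # a modelling decision
        self.min_score = min_score         # an operational threshold

    def score(self, item):
        base = item["engagement"]
        if item.get("is_recent"):
            base *= self.boost_recent      # a value judgement, easily edited
        return base

    def build_feed(self, items):
        ranked = rank_items(items, self.score)
        # filtering is another choice made around, not by, the algorithm
        return [item for item in ranked if self.score(item) >= self.min_score]
```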

This is often missed when algorithms are invoked, all-too-quickly, in the discussion of contemporary phenomena that involve the use of computing in some way. Like Kitchin says in his working paper:

As a consequence, how algorithms are most often understood is very narrowly framed and lacking in critical reflection.

Indeed, I’d go further: there’s a weird sense in which some discussions of ‘algorithms’ connote the dystopian sci-fi of The Matrix or The Terminator. Ian Bogost has gone so far as to suggest that this is a form of ‘worship’ of the idea of an ‘algorithm’, suggesting:

algorithms hold a special station in the new technological temple because computers have become our favorite idols.


And as Gillespie noted in an earlier piece:

there is an important tension emerging between what we expect these algorithms to be, and what they in fact are

Yet, as with the production of religious texts, there are people making decisions every step of the way in the production of software. On top of that, there were decisions made by people at all of the steps of the development and production of the other technologies and infrastructures upon which that software relies.

What we mean when we use the word ‘algorithm’ is, as Gillespie argues (much better than I), a synecdoche: “when we settle uncritically on this shiny, alluring term, we risk reifying the processes that constitute it. All the classic problems we face when trying to unpack a technology, the term packs for us”.

Calling the complex sociotechnical assemblage an “algorithm” avoids the need for the kind of expertise that could parse and understand the different elements; a reporter may not need to know the relationship between model, training data, thresholds, and application in order to call into question the impact of that “algorithm” in a specific instance. It also acknowledges that, when designed well, an algorithm is meant to function seamlessly as a tool; perhaps it can, in practice, be understood as a singular entity. Even algorithm designers, in their own discourse, shift between the more precise meaning, and using the term more broadly in this way.

How should we study algorithms?

Here, I’m not totally sure I can add much to what Kitchin has written in his working paper. This has been adroitly summarised by John Danaher on his blog, and he produced this brilliant graphic:

[Image: The challenges and methods for studying algorithms]


We need to be alive to the challenge that most of the software we may be interested in is proprietary – although not always, and we can look to repositories like GitHub for plenty of open source code – and so it may actually prove impossible to gain access to the code itself, and here we’d really want the code of the whole programme, and perhaps the training data too. Likewise, software, like any complex endeavour, may well be the result of collective authorship and be maintained by lots of different people. So there are complex sets of relations between people, laws, protocols, standards and many other considerations to negotiate – they are contextually embedded. Furthermore, the programmes we’re calling algorithms here actually have a hand in producing the world within which they exist – they may bring new kinds of entities and relationships into existence, they may formulate new and different forms of spatial relation and understanding. In this sense they are ontogenetic and performative. Added to this, once ‘in the wild’ this performativity can render the outcomes of a programme unexpected and peculiar, especially once it is fed all sorts of data, adapted and adopted in unexpected ways.

How can we go about studying these kinds of socio-technical systems then? Well, rather than treat as discrete the six techniques offered by Kitchin (summarised above), I’d argue we need to combine most of them. Even so, it may prove extremely difficult to actually gain access to code. Further, even if one might reflexively produce code and/or attempt to reverse engineer software, some systems are the product of companies with such extensive resources that it may well prove near impossible to do so. Where social scientists might find more traction, and be able to make a more valuable contribution, is, as Kitchin suggests, looking at the full sociotechnical assemblage. We can look at the wider institutional, legal and political apparatuses (the dispositifs) and we can certainly look at the various kinds of relation the assemblages make and how they are enrolled in performing the world they inhabit.

Make no mistake – this kind of research is necessarily hard. I’m not sure I can imagine papers in geography journals that tie the A/B-testing logs of experiments in how a given system works, and the commit logs of version control systems (even if you could access them!), to particular forms of experience and/or their political consequences… but it might be worth a go. Perhaps this difficulty is why we see (and I am as guilty as any of this) papers written in abstract terms, focusing on the (social) theoretical ways we can talk about these sociotechnical systems in broad terms.

Like Bogost, I can see a kind of pseudo-theological romancing of the ‘algorithm’ and the agency of software in much of what is written about it, and it’s sort of easy to see why – it is so abundant and yet relatively hidden. We see effects but do not see the systems that produce them:

Algorithms aren’t gods. We need not believe that they rule the world in order to admit that they influence it, sometimes profoundly. Let’s bring algorithms down to earth again.

The algorithm as synecdoche is a kind of ‘talisman’, as Gillespie argues, that reveals something of what Stiegler calls our ‘originary technicity’ – the sense in which we (humans) have always already been bound up in technology, and it’s the forgetting and then remembering of this that forges our ongoing reconstitution (transindividuation) of ourselves.

So, I’d like to end with Kitchin’s argument:

it is clear that much more critical thinking and empirical research needs to be conducted with respect to algorithms [and the software and sociotechnical systems in which they are necessarily embedded] and their work.

Addendum:

It is worth noting that both Gillespie and Kitchin draw on another paper that is very much worth reading, by Nick Seaver, who reminded me of this on Twitter – take a look:

“Knowing Algorithms”, presented at Media in Transition 8, Cambridge, MA, April 2013 [revised 2014].

Museum of Contemporary Commodities

I have meant, and singularly failed, to blog about this fantastic project for a while, so I mean to make amends now:

The Museum of Contemporary Commodities (MoCC) is an artistic, collaborative and activist project cooked up by the excellent Paula Crutchlow – who just so happens to be a PhD student in Exeter Geography Dept. The project is a collaboration with my colleague Ian Cook and several other interesting and exciting people – not least the folk at Furtherfield.

Funnily enough, I learned about MoCC when it was in its larval stages before I joined the department at Exeter – Ian and Paula held a ‘Thinkering Day’ here in Devon and we explored some ideas around what MoCC could be… the event was supported by REACT (hence why I learned about it) – here’s a snazzy video:

You should definitely explore the MoCC website to learn more, but here’s how MoCC is explained on the Furtherfield website:

The Museum of Contemporary Commodities (MoCC) is neither a building nor a permanent collection of stuff – it’s an invitation. To consider every shop, online store and warehouse full of stuff as if it were a museum, and all the things in it part of our collective future heritage.

Imagine yourself as this museum’s curator with the power to choose what is displayed and how. To trace and interpret the provenance and value of these things and how they arrived here. To consider the effects this stuff has on people and places close by or far away, and how and why it connects them.

What do we mean by things or stuff? Everything that you can buy in today’s society. The full range of contemporary commodities available to consume.

Please join us on our journey by browsing and adding to our collection, attending an event, and becoming a researcher. We are currently curating connections between trade-place-data-values.

There are further events planned and lots more exciting things to come so keep an eye on both the Furtherfield and the MoCC websites.

Culture & technology – the text of my ‘provocation’

The video recordings of the two panel sessions for the Provocations of the Present event at the OU earlier this month have been posted online (links at the bottom of this page). The event was an enjoyable opportunity to engage with a range of contemporary concerns in cultural geography and a moment of reflection on the politics of such concerns – refracted through engagements with Doreen Massey’s work (it was, after all, the ‘6th annual Doreen Massey event’) and Peter Jackson’s Maps of Meaning (which, as Phil Crang observed, is in its ‘silver jubilee’ year, having been published 25 years ago).

The panelists were asked to offer a ‘provocation’ in advance of the event that would be posted to the Geography Matters Facebook page in order to promote discussion. We were instructed to write short provocations that made punchy points – a difficult task for a wordy academic(!) but this is my contribution:

All culture is in some way technical: it is an expression of technicity, understood as the ways in which technologies (in their broadest sense) are intimately enfolded into our experience of the world. How does this challenge or (re)configure understandings of ‘the human’ in human geographies? What kinds of politics does such an understanding of culture either reveal or elide?

In the talk I gave at the actual event I developed some of these themes in two ways: first, I asked how contemporary forms of mediation might prompt cultural geographers to think about popular culture and the ways we study it; and, second, I questioned what kinds of assumptions about ‘the human’ we are making when we talk about the ‘non-human’ or the ‘post-human’, and how this influences how we talk about technology in relation to culture.

So, à propos of nothing apart from making this text do something more than be read aloud, here is what was basically a script of my talk:

Provocation:

As the introduction to the event highlights, in the conclusion to Maps of Meaning Peter Jackson offers the provocation that continues to resonate in contemporary debates around cultural geography, whether we are debating affect and emotion, identity and difference or the human and non-human:

If cultural geography is to be revitalised, … ‘It can only be by an engagement with the contemporary intellectual terrain – not to counter a threat, but to discover an opportunity’ (Jackson 1989: 180; Stedman Jones 1983: 24).

I want to address the kinds of discovery we might engage in, and in particular focus on two ‘provocative’ questions about what counts in the study of cultural geography. These carry with them the implications of the earlier provocation I was invited to post to Facebook that: all culture is in some way technical: it is an expression of technicity, understood as the ways in which technologies (in their broadest sense) are intimately enfolded into our experience of the world.

So, the first question I will pose is in relation to ideas about popular culture; and the second is in relation to the status of the human in cultural (human) geography.

My first provocation, then, is: should cultural geography be more vulgar? Which can be phrased differently as: how might we better accommodate geographies of ‘popular culture’ in our cultural geographies?

In a recent essay on the recurring accusation of ‘vulgar Marxism’, McKenzie Wark articulates several ways in which the epithet of vulgar gets used, and I want to discuss a couple here.

First, vulgarians too readily align themselves with the interests of the working class, or to put it in Clive Barnett’s words, “the implication is that, by even suggesting that there may be some relationship between the higher things in life (opera, literature, fine wine and so on) and base considerations like work, causal explanation itself is guilty of bad taste”.

Second, vulgarians don’t think in great revolutionary leaps but rather in more modest durations. We might think here about the mundane and the everyday, and the micro-spatio-temporalities of repetitive and taken-for-granted tasks.

What then, might it mean to do ‘vulgar’ cultural geography?

One might argue that the first twenty-something years of ‘new’ cultural geography are founded on readings: reading various landscapes and other spatial formations as texts, as the Duncans and others asked of us in the late 1980s. This, of course, speaks to a particular understanding of the medium and the expression of culture: a deliberative cogitation on particular constellations of meaning. One might argue that this implicitly goes hand-in-hand with a particular aesthetics of literature: when we are asked to ‘read’ landscapes, it is analogous to reading Charles Dickens or Milan Kundera, and not Dan Brown or E. L. James. We apparently ought to aspire to an ‘unbearable lightness’ and not ‘fifty shades’ of cultural geography.

To attend to the everyday and to the popular one might suggest that different media and forms of expression of culture can become more suitable analogies. What does it mean to think in terms of television, radio, the smart phone and the tablet? So, perhaps a range of different sensibilities of watching, listening or touching needs to be added to reading. Such sensibilities might be more accommodating to recent proposals of atmospheres, not least by Ash and Anderson, in addition to landscapes of meaning and sensation. I am thinking in particular here of recent work by geographers that has been described as post-phenomenological (for example, see Ash & Simpson’s forthcoming piece in Progress in Human Geography).

Our techniques for thinking are intimately tied to the mediums through which we express thought and so, to pursue the metaphor, as increasingly ‘multimedia’ scholars we might well supplement the reading of landscapes as text with the watching of, listening to and touching forms of spatial experience as image and sound and haptics.

The subtitle to this event – what culture for what geography? – is for me pregnant with an additional question. So my second point of provocation is the question: what do we mean by the “human” in human geography as it is discussed today? And, in particular, how might this relate to the various ways in which we figure things as not, or perhaps after, whatever it is we mean by human?

We have all variously framed our research in relation to different formulations of ‘the human’ – humanistic geographies and their critique, the non-human, the post-human – but the category of the human can still be left as assumed. Such assumptions behind this prefixed ‘human’ condition and constitute how we understand and describe experiences of spatiality. So this strikes at the heart of a key, ongoing concern for cultural geography and the contribution we as cultural geographers can continue to make.

Specifically, I want to raise this in terms of the implications of our various uses of the non-human, the post-human and invocations of technology.

As others, including Castree and Nash, have observed, the assumptions that ‘the human’ carries with it are expressed in binaries – the normative human and its others. Nevertheless, by proposing a prefixed non- and post-human we risk slipping back into reaffirming the category of the ‘bracketed off’ or ‘exceptional’ human, and so need to remain vigilant.

In particular, technology has been packaged up into the ‘non-human’ and particularly ‘post-human’ whereby the implication is that particular kinds of technology are entirely separate from us, humans, and can be seen to have ‘effects’ upon us and our societies. This is acutely evident in relation to digital technologies through forms of spatial imagination that can imply servitude to an autonomous and powerful technical master on the one hand and a gateway to the transcendence of material form on the other. While geographers have been rightly critical of these simplistic binaries of human/technical, there remains a common habit of referring to technologically mediated activities as somehow extra-spatial, as virtual, in contra-distinction to a ‘real’.

I have argued recently in Progress in Human Geography that, rather than propagating a peculiar human exceptionalism, we can understand ‘the human’ and technology as existing in a co-constitutive relation that can be called ‘technicity‘. There isn’t the one without the other. So, for example, rather than appeal to an amorphous alternate realm from which digital technologies draw their agency, we can instead study the particular spatial formations in which agency and technicity are generated.

Geography is, in this way, well placed to inform and enhance social scientific research concerning digital technologies, particularly in relation to the articulation of spatial experience and knowledge.