I’ll be giving two talks in the next week that both address the ways claims are made about ‘algorithms’, in terms of what they are and what they can do. I frame this in terms of an ‘algorithmic imaginary’.
The first talk is titled “Prosthetic stupidity, or world-ing by numbers” (more accurately, following Derrida, this should be ‘prothétatique’, but that’s far too enigmatic!), and is short. I’ll be taking part in the launch event for the University of Bristol’s MSc Society & Space blog, and discussing how the sociotechnical assemblages we name as algorithms perform a kind of world-ing rather than reflecting a world, which, of course, has significant implications for the growth of ‘big data’–driven computational social science.
The second talk, entitled “An algorithmic imaginary: anticipation and stupidity”, is longer. This is an invited seminar, and a part of the University of Oxford Institute for Science, Innovation and Society’s “Anthropological Approaches to Technics” seminar series.
In both talks I’m playing around with co-opting the idea of an ‘imaginary’ (in the vein of ‘geographical’ or ‘sociological’ imaginaries) to offer a critical reading of how particular stories about automation and agency are taking hold. I acknowledge others have already employed the term (e.g. Bucher, Mager), but I think I’m offering a novel definition of my own here… In the talks I frame this in two ways: in terms of ‘anticipation’ and in terms of ‘stupidity’ (there is more detail in the written piece I’m developing on the back of the talks).
Firstly, the phenomena labelled ‘algorithms’ are suggested to anticipate the activities of people, organisations and (other) mechanisms. This is one of the substantive claims of ‘big data’ analytics in relation to any form of ‘social’ data, for example. It is certainly true that, building on ever-larger datastores, software (with its programmers, users and so on) has a capacity to make certain kinds of prediction. Nevertheless, and as many have pointed out, these are predictions based upon a model (derived from data) that I argue constitutes a world (it does not reflect the world – these predictions are ontogenetic, calling entities/relations into being, rather than descriptive).
Further, precisely because these anticipatory mechanisms are often a part of systems that use their outputs in order to select what may be seen, or not, and thus what may be acted upon, or not, they are arguably a form of self-fulfilling prophecy. The anticipation is ‘proven’ accurate precisely because it functions within a context where the data and its structures (the model) are geared towards their efficient calculation by the ‘algorithm’. Thus, we might choose to be more cautious about the claims of large social media experiments that are focused on a single platform, precisely because they are self-validating. A social media platform is a world unto itself, not a reflection of ‘reality’ (whatever we take that to mean). Indeed, it has been highlighted by others (Mackenzie 2005, Kitchin 2014) that the outcomes of ‘algorithms’ can be unexpected in terms of their work in world-ing.
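To make that self-validating loop concrete, here is a minimal, hypothetical sketch (a toy recommender, not any actual platform’s code; the item names, scores and probabilities are invented for the example). Feedback is only ever gathered on the items the model has already selected, so the data the model learns from, and is judged against, is a product of its own selections.

```python
import random

# A minimal, hypothetical sketch of a self-validating anticipation loop:
# feedback exists only for what the model chose to show, so the 'world'
# the model is tested against is one of its own making.

CATALOGUE = [f"item_{i}" for i in range(100)]
scores = {item: random.random() for item in CATALOGUE}  # the 'model'
ever_shown = set()

def anticipate(n=10):
    """Select the n items the model expects to be engaged with."""
    return sorted(CATALOGUE, key=scores.get, reverse=True)[:n]

def observe(shown):
    """Feedback is only ever collected for what was shown."""
    return {item: random.random() < 0.7 for item in shown}

for _ in range(50):
    shown = anticipate()
    ever_shown.update(shown)
    for item, engaged in observe(shown).items():
        # The model is updated only from data its own selections produced.
        scores[item] += 0.1 if engaged else -0.1

print(f"Items that ever generated any data: {len(ever_shown)} of {len(CATALOGUE)}")
```

The point of the sketch is simply that items the model never surfaces generate no data at all; the ‘accuracy’ of the anticipation can only be assessed within the world the system has already selected.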
Yet, the supposition of such an anticipation is, itself, a form of anticipation – a kind of imagining of agency. The capacity to ‘predict’ is suggested to have effects, and those effects produce particular kinds of experience, or spaces. Visions of a world are conjured with what we imagine ‘algorithms’ can do. There is thus a double-bind of anticipation: to write anticipatory programmes, a programmer must imagine what kinds of things the programme can/should anticipate. There is accordingly a geographical imaginary of anticipatory systems. Furthermore, that imaginary is becoming normative – in two senses: normative or prescriptive in the sense of the double-bind just mentioned; and normative, in the Wittgensteinian sense, such that the imaginary becomes the criteria by which we judge whether what we say about something (e.g. ‘algorithms’), and how we say it, is appropriate to the context of discussion.
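As a small illustration of that double-bind, the following is a minimal, hypothetical sketch (the schema and its categories are invented for the example): whatever the programmer imagines in advance fixes what the programme can later ‘anticipate’, and anything outside those categories can only be refused or misfiled.

```python
from enum import Enum

# A minimal, hypothetical sketch of the double-bind of anticipation:
# the categories imagined in advance are the only situations the
# programme can subsequently admit.

class Household(Enum):
    SINGLE = "single"
    COUPLE = "couple"
    FAMILY = "family"

def record_household(value: str) -> Household:
    """Admit only the situations the schema anticipated in advance."""
    try:
        return Household(value)
    except ValueError:
        # The unanticipated case cannot be represented, only rejected.
        raise ValueError(f"'{value}' was not anticipated by this schema")

for reported in ("couple", "multi-generational"):
    try:
        print(record_household(reported))
    except ValueError as err:
        print(err)
```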
Secondly, ‘algorithms’, as socio-technical apparatuses, can, if we allow, act as a mirror in which we might reflect upon the generation and use of sets of rules, and how they are followed[1]. In order for contingencies to be accommodated, the anticipatory ‘world-ing’ of the programmer must be complex (and a form of catastrophism – always planning for the potential error or breakdown). Such a reflection upon ‘algorithms’ is, in effect, a reflection upon reason and stupidity. For the purposes of this post, I identify two elements to this reflection: the reification of the apparatus we call ‘algorithms’; and the idiomaticity and untranslatability of language in terms of the conventions of programming ‘code’.
Much of the recent discourse of ‘algorithms’ invites, or even assumes, a belief in the validity and sovereignty of the black-boxed system named an ‘algorithm’. The ‘algorithm’ is reportedly capable of extraordinary and perhaps fear-inducing feats. We are often directed to focus upon the apparent agencies of the code as such, perhaps ignoring the context of practices in which the ‘algorithm’ is situated: practices of ‘coding’, ‘compiling’ (perhaps), ‘designing’, ‘managing’, ‘running’ and many others that involve the negotiation of different rationales for how and why the ‘algorithm’ can and should function. There is nothing in-and-of-itself “bad” about the apparently hidden agencies of an ‘algorithm’ – although, of course, sometimes questionable activities are enabled by such secrecy – and focusing upon that hidden-ness elides those contexts of practice[2].
By ‘reifying’ (following Adorno and Horkheimer 2002; Stiegler 2015) the black-boxed ‘algorithm’ we submit to a form of stupidity. We allow those practitioners that enable the development and functioning of the ‘algorithm’, and ourselves as critical observers, to “vanish[…] before the apparatus” (Adorno and Horkheimer 2002, xvii). This is inherently an act of positioning ourselves in some kind of peculiarly subordinate relation to the apparatus; it is a debasement of our theoretical knowledge (because, of course, we understand the context of practices, we understand the kinds of ‘world-ing’ discussed above), and of our critical ‘know-how’. Such a ‘stupidity’ is a tendency towards an incapacity: an incapability to meet the future, deferring instead to the calculative capacities of the apparatus, and its (arguably) impoverished world-ing.
A suitably humorous example of this is, of course, David Walliams’ character Carol Beer who, in the sketch comedy programme Little Britain, has a blithe and unbending deference to the computer – which simply “says no”. The whole premise of the joke Walliams presents with that catchphrase is that there should, of course, be room for interpretation, and yet we are presented with a blind adherence to the results of the programme – which is patently stupid. Nevertheless, in many moments of everyday life we are faced with such forms of adherence to nonsensical outputs from software – we may even feel compelled to be complicit. Following Stiegler (2015), we might recognise that a part of what makes this funny is that a moment of stupidity (i.e. its recognition) is also a moment of shame: if we value reason and independent thought, Walliams’ character should feel ashamed of her ‘stupidity’ – as Stiegler (2015, 46) says: “a stupidity such that I perceive my own being stupid”. This is not (normatively) a “bad” thing: how else can one become the person we desire to be (or ‘individuate’) than by recognising our own ‘stupidity’[3]? In this light, stupidity cannot be opposed to knowledge, nor is this a ‘stupidity’ that is necessarily forced upon us. Reflecting upon stupidity is always a reflection upon my own stupidity; it is a means of thinking the passage to knowledge. Crucially, we realise it only in retrospect. If we are to take such a critical understanding of stupidity seriously, we are therefore called to attend urgently to the ‘reification’ of what we name ‘algorithms’ and the knowledge claims that are made on the back of the suppositions we accordingly make about their operation.
It is possible to be convinced that the reductive forms of language, formulated through formal logic, that constitute a ‘programming language’ cannot be idiomatic or open to difficult forms of interpretation. Yet (of course) they are – and this is an issue of translation. A programme must be (however minimally) written and read, with the rules of such activities agreed upon (an archetypically ‘normative’ operation). Yet the range and scope of contexts in which such reading and writing must function are very broad. There is always some ambiguity in the interpretive function of ‘reading’, or more accurately ‘translation’ (translation as the execution of the code, the way it is ‘compiled’ [into binary] or ‘interpreted’ by another layer of software, or its transposition into another codebase, for example through an API). Indeed, as Kitchin (2014) points out, there is an issue of translating a problem (in the mathematical sense) into a structured formula and thence into some form of software programme. Likewise, in software systems the software itself makes no sense without some kind of translation of data: either of formats or units of data (a prime example being the ‘stupidity’ that led to the failure of NASA’s Mars Climate Orbiter), or of expected forms of data (a good example being the ‘stupid’ assumptions about what kinds of interaction, and thus kinds of data, the Microsoft “Tay” Twitter bot would end up ‘learning’ from – a form of stupidity on the part of the Microsoft programmers, who failed to recognise and negotiate normative contextual issues).
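The units case can be made concrete with a minimal, hypothetical sketch (the function names and figures are invented, not NASA’s code): one component reports impulse in pound-force seconds, another consumes the figure assuming newton-seconds, and because a bare number carries no unit the mismatch passes silently.

```python
# A minimal, hypothetical sketch of the kind of unit 'translation' failure
# behind the Mars Climate Orbiter loss: the value is passed along as a bare
# float, so nothing in the code flags the difference in units.

LBF_S_TO_N_S = 4.44822  # 1 pound-force second expressed in newton seconds

def thruster_report_lbf_s() -> float:
    """Ground software reporting an impulse figure in pound-force seconds."""
    return 100.0

def update_trajectory(impulse_n_s: float) -> float:
    """Navigation software expecting newton seconds (stands in for the real calculation)."""
    return impulse_n_s

# The silent mismatch: the call 'works', but the figure is ~4.45 times too small.
wrong = update_trajectory(thruster_report_lbf_s())

# The translation has to be made explicitly (or carried by a units-aware type).
right = update_trajectory(thruster_report_lbf_s() * LBF_S_TO_N_S)

print(wrong, right)  # 100.0 vs ~444.8
```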
An ‘algorithmic imaginary’, as briefly outlined here, has become normalised in our discussions of how computation and software play an increasingly significant part in various aspects and processes in our lives, which always take place within systems (sociotechnical assemblages). The challenge of such an algorithmic imaginary is that it is, sadly, couched in either dystopian and defeatist or blandly a-political terms: we are either doomed to ‘welcome our new algorithmic overlords’ (to paraphrase Kent Brockman) or invited to sink into the stupor of a standardised shiny surface of our lives being made progressively ‘easier’ by apps, gadgets and so on. We need not ‘believe’ in the world-ings of ‘algorithms’ or reify the precarious achievements of software. Even when intentions are noble, what is done as ‘social science’ under the umbrella of ‘big data’ is in danger of eliding more than it apparently reveals. We can and should instead look to our critical toolbox and examine the contexts of practice of ‘algorithms’ and the systems of which they are a part, and here we already have some excellent resources (see: Gillespie, Kitchin, Mackenzie, Miller, Seaver). We must forge alternative, diverse and resolutely political sociotechnical imaginaries, and hone our capacities to intervene – even at the level of code; for one of the most important things we can do “in this life that must constantly be critiqued in order for it to be, in fact, worth living – is the struggle against stupidity” (Stiegler 2013, 132).
Notes
- Even if we are considering complex forms of ‘machine learning’ there are always foundational rules set within the software platform or the hardware systems, and indeed the choice of ‘training data’ that reflect particular forms of decision-making.
- One might think of this through the lens of the ‘pharmakon’: what we call ‘algorithms’ are both a support to structures that increase our capacities (supporting forms of individuation) and carry the potential (sometimes actualised) to harm our capacities (leading to disindividuation, and thus to ‘baseness’ [bêtise]) – “That which is pharmacological is always dedicated to uncertainty and ambiguity, and thus the prosthetic being is both ludic and melancholy” (Stiegler 2013, 25).
- Both Jacques Derrida (2009) and Bernard Stiegler (2013, 2015) identify, following Deleuze (1994), the question of how ‘stupidity’ is possible as one that is transcendental: “If we are stupid it is because individuals individuate themselves only on the basis of preindividual funds (or grounds) from which they can never break free; from out of which they can individuate themselves, but within which they can also get stuck, bogged down, that is, disindividuate themselves” (Stiegler 2015, 46). For Derrida, again following Deleuze, such a ‘ground’ is a ‘groundless ground’ [fond sans fond] (Derrida 2009) insofar as
“Stupidity is neither the ground nor the individual, but rather this relation in which individuation brings the ground to the surface without being able to give it form” (Deleuze 1994, 151). This ‘groundless ground’, or baseness, can be forged from knowledge that has become ‘well known’ (akin to the Wittgensteinian normative) yet remains ‘susceptible to regression’ (Stiegler 2015, 47). Indeed, it is this (‘pharmacological’) negotiation of tendencies that fuels Stiegler’s use of ‘entropy’ as a trope.
Some references
Adorno, T and Horkheimer, M 2002 The Dialectic of Enlightenment, Stanford University Press, Stanford.
Deleuze, G. 1994 Difference and Repetition, trans. Patton, P. Columbia University Press, New York.
Derrida, J. 2009 The Beast and the Sovereign: Volume I, trans. Bennington et al., University of Chicago Press, Chicago.
Kitchin, R. 2014 “Thinking critically about and researching algorithms”, The Programmable City Working Paper 5, pp. 1-29.
Mackenzie, A. 2005 “The performativity of code: software and cultures of circulation”, Theory, Culture and Society 22(1): pp. 71-92.
Miller, D. 2015 Social Media in an English Village, (Why We Post Series), UCL Press, London.
Stiegler, B. 2013 What Makes Life Worth Living: On Pharmacology, trans. Ross, D., Polity, Cambridge.
Stiegler, B. 2015 States of Shock: Stupidity and Knowledge in the 21st Century, trans. Ross, D., Polity, Cambridge.