Algorithmic catastrophe – the revenge of contingency, new paper by @digital_objects

A new issue of Parrhesia is out and it contains a new paper from Yuk Hui.

 

All catastrophes are algorithmic, even the natural ones, when we consider the universe to be governed by regular and automated laws of motion and principles of emergence.

It begins with this bold statement, qualified through a reading of Aristotle, which leads us to:

But these material, technological catastrophes are not examples of what I am proposing here to call algorithmic catastrophes. Algorithmic catastrophe doesn’t refer to material failure, but rather to the failure of reason.

And it is this form of catastrophe to which the discussion tends. Through a reading of the work of Bernard Stiegler, in relation to Martha Nussbaum and Plato/Socrates, Hui argues that the history of Occidental philosophy has been preoccupied with the techno-logical ‘accident’:

This resonates with the two senses of accident that we have explained above: on one hand, the revelation of substance through accidents, meaning the accidents become necessary; on the other hand, the overcoming of the irrational through reason.

Hui then expertly dissects understandings of accident and contingency in relation to what is thought of as ‘automatic’, which leads to this lovely passage:

As an engineer and designer, one has to be assured that it is normal to have a catastrophe. If catastrophe is thus anticipated and becomes a principle of operation, it no longer plays the role it did with the laws of nature. This use of anticipation to overcome catastrophes can never be completed, however, and indeed accident expresses itself in a second level of contingency generated by the machines’ own operations. Herein also lies the second difference between the algorithmic contingency and the contingency of laws of nature, which we would like to approach in the next section. It doesn’t mean that the algorithm itself is not perfect, but rather that the complexity it produces overwhelms the simplicity and clarity of algorithmic thinking. This necessity of contingency takes a different form from the necessity in tragedy and in nature…

Automation then becomes the target of deconstruction, with a haunting of Virilio, explicated through the 2010 ‘Flash Crash’. Nevertheless, the tendency here is towards automation that exceeds the human capacity to react, as Hui has it:

The automation of machines will be much faster than human intelligence, and hence will lead to a temporal gap in terms of operation. The gap can produce disastrous effects since the human is always too late, and machines won’t stop on their own. In face of our inability to fully understand the causality, Wiener warns us that “if we adhere simply to the creed of the scientist, that an incomplete knowledge of the world and of ourselves is better than no knowledge, we can still by no means always justify the naive assumption that the faster we rush ahead to employ the new powers for action which are opened up to us, the better it will be.”

The paper moves on to consider a speculative aesthetics of the accident, through a reading of Meillassoux’s ‘speculative realism’. Just as Meillassoux attempts to reach back beyond the ‘ancestrality’ of the human, Hui argues that we are challenged by automation, in the figure of algorithms:

exteriorized reasons, where we find more and more that human reason is becoming less and less capable of understanding the system that it has succeeded in constructing.

Thus:

In the digital age, accidents in both senses come to the fore and beyond, as indicated by the contingency, the unknown, which also comes to the front.

The algorithmic catastrophe also resonates with current research on speculative reason, especially what Meillassoux proposes as the absolutization of contingency, which reinvents the metaphysical concept of contingency as necessity while it renounces the subjectivist approach towards knowledge. The celebration of speculative reason seems to be an appropriation of the catastrophic aesthetics of our time, where the unknown and black box become the sole explanations.

It is certainly an interesting, if dense, article and probably requires some knowledge of the philosophical references Hui is drawing upon to get the most out of it.

I am left wondering, as a less-sophisticated non-philosopher, how one might square this argument with technics as the ‘horizon of all possibility to come and all possibility of a future’, pace Stiegler in Technics and Time. The computational ‘transindividuations’ (the becomings of trans-individual assemblages) that we initiate and cultivate through digital ‘industry’ may begin to probe their way into possibilities outside of our sensory or conscious capacities, but they remain, at present, limited precisely by their foundation in a human phenomenological domain. Nevertheless, as Hui argues in his final paragraph:

it would be ignorant to just dismiss the algorithmic catastrophe as something from science fiction. The words of the physicists [Hawking et al. warning about the risks of AI] also remind us of Book III of Plato’s Republic, where the physicians return as guardians of the polis. Should these guardians be scientifically well-trained philosophers or philosophically trained physicians is not a question without importance, since it means a new pedagogical program and a new conception of responsibility. Beyond the reach of this single article, what Virilio proposes as a rethinking of responsibility remains largely undiscussed.

I probably don’t know enough of the references Hui is drawing upon to be able to offer a cogent response to this, but it is a very interesting article and worth a read [it is open access!].

‘Escaping the Anthropocene’, Stiegler on the Anthropocene

Following on from the various discussions and reflections on geography’s disciplinary mobilisation of the idea/concept of ‘the anthropocene’, I thought I’d just link to a bunch of things Bernard Stiegler has written about/in response to the concept.

I cannot pretend to be totally on top of this work. Likewise, I am not totally sure I can buy into the argument Stiegler presents in these works, but it is relatively interesting nonetheless…

These are all translated and made available by the inimitable Daniel Ross:

“The Anthropocene and Neganthropology”, lecture by Bernard Stiegler at Canterbury, 2014.

“Escaping the Anthropocene”, lecture by Bernard Stiegler at the University of Durham, January 2015.

“Automatic Society 1: The Future of Work – Introduction”, the introduction to Stiegler’s latest(?) book Automatic Society 1: The Future of Work (another series, apparently!). This has been published in the new journal La Deleuziana.

 

Lessons in automation from Looney Tunes and Tom & Jerry

I’ve been idly thinking about the automation of everyday life and the kinds of visions of the future that have been represented (repeatedly) by technology makers, not least in terms of ‘ubiquitous computing‘, and about the rich history of this in World’s Fairs:

[Image: New York World's Fair 1939]

And the forms of spatial imagination that were inherent within these visions of the future were leapt upon in popular culture, especially in cartoons. What I mean is: the ways in which the relationships between  people, buildings, locations, things and the sorts of ways of living that thereby emerge form a kind of vernacular for understanding possible futures, which are grounded in (broadly) speculative extrapolations of the experience of space and place.

From the obvious ‘space age’ Jetsons to the less obvious reinterpretations of technologies of convenience (using dinosaurs and stone tools) in the Flintstones, there was a lot of co-opting of the futures being sold by the likes of General Motors, in their “Futurama” exhibit at the 1939 New York World’s Fair, and Disney’s “Epcot”.

And so we get a Tom and Jerry cartoon (‘Push-Button Kitty’) that resonates with contemporary stories of the loss of jobs to automata – with ‘Thomas’ replaced by a robotic cat:

Another famous GM depiction of an automated home that draws on the forms of spatial imaginary of Futurama and its ilk is the film “Design for Dreaming”, in which Thelma Tadlock guides us through kitsch depictions of the cars and the kitchen of the future.

It is interesting (and heartening) that at the same time you get a satirised version of this with Daffy Duck and Elmer Fudd in the Looney Tunes cartoon “Design for Leaving”, in which (of course) everything goes wrong [and in this vision there’s no Robert DeNiro plumber to fix it]:


[Video: Elmer Fudd – “Design for Leaving”, uploaded by DwightFrye]

These cartoons are emblematic of an almost contemporaneous satirising of visions of the future, which is probably rather healthy. We (academics, anyway) have a tendency to take these future visions rather seriously, and in many cases that is justified – ways of relating a future have politics and they do particular kinds of political work. Nevertheless, it seems that I (and possibly others) have missed a trick in not paying a little more attention to how popular culture addresses these lofty visions of an automated future everyday life.

I wonder if we might count the recent darkly humorous ‘speculative fictions’ of Black Mirror, by Charlie Brooker, in this tradition. For example, if we have the vision of a day with “Glass” by Google:

Then we have the accompanying tale of the obsessive and creepy behaviour that such forms of life-logging and AR might produce, in “The Entire History of You”, the third episode of Black Mirror:

These of course stray from the topic of automation, but they serve to illustrate the power of satire in relation to visions of the future. It would be great to see more cartoons that comically critique and reimagine the kinds of stories we are currently being told about a future of automated everyday life.

Incidentally, I’ll be touching on some of this in a ‘paper’ (talk) I am giving at the RGS-IBG annual conference this year (in Exeter!), as part of the ‘Algorithmic Practices‘ sessions.

JOB> 0.5 Research Fellow, The Automation of Everyday Life (UWE Bristol)

The Digital Cultures Research Centre is advertising for a part-time (0.5) Research Fellow to support a project led by Dr Patrick Crogan on ‘The Automation of Everyday Life’:

The Digital Cultures Research Centre is seeking a Research Fellow in Digital Cultures: The Automation of Everyday Life on a part time (50%) and fixed term basis until 31 August 2016.

From software agents, predictive systems and recommendation services, to robotics, drones, and artificial companions, automated text and design experiments, and on to the internet of things, the automation of choices, actions and production is already a crucial theme in the emergence of 21st century digital culture.

In this role you will play a key part in developing the DCRC’s research and knowledge exchange programme in the area of the automation of cultural production and practices.

You will engage with a significant expansion of DCRC activities in this rapidly emerging area. Working across the DCRC’s local community, through regional, national and international connections, the post holder will develop interdisciplinary dialogues and partnerships with researchers and digital producers on projects that address and/or experiment with critical or creative responses to the automation of everyday life. You will help to grow the DCRC network of media, arts, robotics and software researchers and practitioners around this area, supporting and initiating the development of proposals to a variety of external funding bodies.

You will assist on increased DCRC initiatives across this theme, taking a lead in co-ordination and organisational support of externally facing research and knowledge exchange activities with a view to maximising impact. In addition to developing and extending your own specific programme of research, you will be expected to publish at an international level. Acting as a member of the editorial team for the DCRC website, you will play an important part in the promotion and external communication of DCRC research.

As a candidate (I’m guessing) you should probably be theoretically informed and take a look at some of Bernard Stiegler’s recent work on automation (this is probably the impetus for the project). You should probably also take a look at the Pervasive Media Studio to get a sense of where the DCRC operates…

Here’s the full advert with the necessary instructions for application…

Robot-caused death

A robot has tragically caused the death of a VW worker, we learn in today’s news coverage. According to Wikipedia (yes, I know…) this is, quite remarkably, only the third reported industrial robot-caused fatality.

The first two reported deaths of workers caused by manufacturing robots happened in the late 1970s and early 1980s:

Robert Williams was killed by a parts-retrieval robot at the Ford Flat Rock Casting Plant, near Detroit, Michigan, in January 1979. A US court found the manufacturer of the equipment liable and ordered Unit Handling Systems to pay USD $10m to the Williams family.

Kenji Urada was killed in Akashi (near Kobe) by an industrial robot in a Kawasaki Heavy Industries plant in July 1981.

There is a temptation to think about some kind of intentionality (however weak) or malign tendency on the part of the machine here, largely because of the word ‘robot’ and the rich imagination of what constitutes the entities we commonly understand that term to refer to. The chatter on Twitter about the sad VW incident certainly seems to invoke the kind of Asimov-style sci-fi robotics of I, Robot and the Terminator.

The corollary to this is, of course, that we have become very used to seeking blame located in a single figure of a ‘wrong-doer’. Indeed, we have sought (in the UK) to enshrine this in law, after a fashion, by holding senior managers personally to account in corporate manslaughter cases. Thus it is unsurprising that TIME magazine reports that:

Prosecutors are still deciding whether to bring charges and whom they would pursue.

One can argue, however, that a desire for singular culpability is misconceived. The workers tragically killed in all three cases were involved in complex assemblages or systems that, even with contemporary safety systems, have contingencies that might lead to undesirable ends. In this way, while the contexts are very different, the worker in today’s robotically enabled factory is a distant relation of the 19th-century mill worker powerfully described by Friedrich Engels.

Workers both in the 19th century and today have to be alive to the operation of a complex and contingent manufacturing system, with many kinds of rules and procedures that circumscribe a place for the human body and its function within industrial manufacturing. If one places one’s body outside of that circumscribed place, or if the rules of circumscription shift (by accident or design), then the body is at risk of harm.

There may be all sorts of factors that lead to a manufacturing accident, but I would hazard that they are more often than not a condition of what it means to work and how that work is designed: how the body fits within the system, or how that system is programmed to accommodate the body, rather than a malign or supernatural robotic agency.

For example: Urada was trapped by the working arm of the robot, which pinned him against a machine that cuts gears, and he was killed. He had entered a prohibited area around the robot in order to repair it. According to factory officials, a mesh fence around the robot would have shut off the power when unhooked, but instead of opening it, Urada had apparently jumped over the fence. He set the machine on manual control but accidentally brushed against the on-switch, and the claw of the robot pushed him against the machine tooling device.

A grisly story, and one might look at the safety procedures and equipment. There are of course standards and codes of practice for safe working with robots (from ANSI and the HSE, for example). Nevertheless, it remains the case that Urada transgressed those procedures, putting himself at risk. We can only speculate about why he did so, but again that is a condition of work, not of robots as such.

All three of the deaths caused by manufacturing robots are tragic, and all three have prompted, and will no doubt continue to prompt, questions about how complex systems of manufacture that may be hazardous to the human body can be made as safe as possible. They could, and perhaps should, also prompt questions about how we work (see, for example, this article in the FT).

New resource for readers of Stiegler

Philosopher Daniel Ross, translator of many of Bernard Stiegler’s books published in English with Polity, has uploaded quite a few of his translations of lectures and articles by Stiegler to his academia.edu page.

If you’re interested in Stiegler’s more recent work concerning automation and the anthropocene, this is a valuable resource.

See, in particular:

So – a public big THANK YOU (from me) to Dan for sharing!!

Bernard Stiegler – Digital shadows and (en)light(enment)

The translation below is the second half of the “Net Blues” interview with Bernard Stiegler conducted by the Le Monde blog “Lois des réseaux” [Laws of the networks].

In this second half of the interview Stiegler discusses how the web should evolve, developing the trope of ‘enlightenment’ (which he has discussed significantly in Taking Care of Youth and the Generations) and drawing out the play on words between light, shadow and ‘the Enlightenment’. Highlighting the web as the latest stage in publishing technologies (which have historically been central to political movements), Stiegler argues that a new industrial politics must be developed, by Europe, as the ‘curative’ counter to the ‘toxic’ trend towards automation and homogeneity brought about by computation. This is the ‘pharmacological’ character of the internet that Stiegler discusses in the ‘Net Blues‘. The new industrial politics Stiegler argues for has universities and the production of knowledge at its heart.

As usual clarifications or questions over the translation of a particular word are in square brackets and all emphasis is in the original text.

Digital shadows and (en)light(enment)

What do you think needs to happen for the web to evolve?

Bernard Stiegler: I think that the web today is an entropic system. It first appeared, for most of us, not least for myself, as a negentropic opportunity, that is, as a new capacity for diversification, especially because it has allowed people to begin to differentiate between different kinds and uses of media [parce qu’il a permis de démassifier les médias]. Today, major newspapers, such as Le Monde and many others, allow all kinds of actors to convene around the newspaper, collecting together a very diverse range of views. This is an opportunity and it has given the sense of a kind of renaissance, following the development of an extreme consumerism within the mass media and the culture industries becoming an undifferentiated mass [qui avait été une sorte de laminoir] – especially in the last twenty years. Media such as television deteriorated terribly. All of which was related to an economic and industrial model that is now in decline.

Nevertheless, the web initially appeared as an opportunity for negentropy, that is to say of diversification. However, what we’ve discovered in recent years, in particular since we began to speak of the ‘Big Four‘, is the extraordinary hegemony of global giants who have gained an unprecedented prescriptive power over behaviour.

Only the United States takes full advantage of this economy of data. The data economy, which is destroying European national taxation and increasingly deprives public authorities of their capabilities to act, is based on a generalized calculability which is at odds with the negentropic promise of the web. This calculability tends towards the reintroduction of the law of audience ratings. The page ranking performed by Google’s algorithm is a very specific, surgical form of audience rating, which is very efficient and very refined, but, like television audience ratings, it leads to the transformation of singularities into particularities, i.e. computable items, because it homogenises and de-singularises. Unlike the individual, the singular is incalculable. In this case, we have a new entropy process that we feel threatens languages. This is how I interpret Frederic Kaplan’s analysis of what he calls “linguistic capitalism.”

However, this predicament is not inevitable, and things could be entirely different: the web in the service of an intensification of “individuation”, in other words of singularities, is the future–and it is the future of Europe. Left to the present circumstances, however, the web would totally horizontalise and level out information, because it requires processes of computation based on the removal of any lack of transparency.

As I said earlier, the web and the internet are publishing technologies. Publishing technologies are the origin of what Plato calls πολιτεία (Politeia, Ed.) that we translate as res publica in Latin. Why do we translate it as res publica? Because Politeia constitutes a “public good”, as we discussed earlier. And the establishment of public goods requires publication technologies, which for the Greeks was writing. Marcel Detienne has shown that the Athenian city is like a vast typewriter that wrote into the marble of the walls of the city. After each decision of the Βουλή (boulè [parliament], Ed.) the publishers of the law wield the hammer and chisel to carve the decision in stone. This process of publication created the public right of citizens to criticise the law. One enters the politeia, citizenship, and [so] paves the way to democracy. Today, with digital computation, a whole new publication system is in place.

However, we always claim that the state of political rights, and the accompanying reason, is what the Greeks called λόγος (logos, Ed.). However, this assumes that such processes of publication enable disagreement, the publication of contradictory arguments, which we call public debate, and which is a fundamental rule of all rational knowledge. The promise of the web was to revive public, political, scientific or aesthetic debate. But this promise has not been kept. From the moment Google, Amazon and such companies had to make a profit from all of this, they became totally interested in equalising and levelling out data in order to exploit it with algorithms, crushing disagreement rather than enabling traceability and widespread intelligibility. I do not think this situation can last. The basis of knowledge, in all its forms – I’m not just talking about theoretical knowledge, it’s also true of ‘know how’ [savoir-faire] and life skills [savoir-vivre] – is grounded in a fundamental diversification, which, when [knowledge] ceases, leaves behind dead knowledge – like dead languages and towns transformed into museums [villes muséifiées]. If the sciences and knowledge are founded on publication processes, the development of the digital is a radical transformation of knowledge, and in particular of academic knowledge. The power of Europe and the West is founded on power over knowledge. The emperor Frederick Barbarossa, who, in opposition to the Pope, granted freedom to the University of Bologna in the 12th century, initiated a process followed by Oxford, the Sorbonne and Cambridge. It is not the Conquistadors and the caravels that are the primary origin of the centuries-long global domination of the West: it is the reliance upon universities. This clearly evolves in new directions with the appearance of new devices for printing – and the cognitive as well as spiritual revolution brought about by Luther is clearly a consequence of such publishing technologies, by which Luther makes reading the scriptures for oneself the heart of his struggle. Along with the Counter-Reformation, this leads to the foundation of the Jesuit schools and the Jesuits evangelising around the world through their missions, which constitutes a fundamental aspect of the Enlightenment project, and this, with Condorcet[1] and the French Revolution, leads to Jules Ferry[2] via Guizot[3].

Obviously, the web, and the digital more generally, totally reconfigures these maps from top to bottom – not only the maps for teaching but the conditions of scientific research and the life of the mind in all its forms. Europe should not fear this, even less so since it is the origin of the concepts of the web and HTML, and since, in France, CNET [the National Centre for Telecommunication Studies] (which has been destroyed by irresponsible policies) played an important role in the design of the ATM and GSM networks. Europe has played an extremely important role in the configuration of all technical systems, but has failed to make this common knowledge [? elle n’a pas su le socialiser] because European political and economic actors are often blind to such issues. Thus, even when researchers and scientists are daring and inventive, they have found themselves confined to an imitation of the ‘American model’, which is a disaster for a Europe that is totally devoid of an industrial strategy, and condemned to a ‘downgrading’ in the face of the challenges of our times.

What we call the Enlightenment emerged in Europe, and it was produced by the republic of letters resulting from the printing press. We are no longer in the era of pure Enlightenment: we have entered an era of Shadow and Enlightenment [the word “Lumières” here is used as a play on words between light and enlightenment]. It is an era of a pharmacological consciousness: that which drives the speed of technology, the speed of light, also casts the shadows of the ‘toxicity’ of the digital, which necessarily accompany their own ‘cure’ [sa «curativité»]. A new industrial politics should be supported by Europe and must be based on a curative politics and economics of the digital, deliberatively and rationally battling against its toxicity. After Edward Snowden’s revelations [discussed in more detail in ‘the Net Blues‘] every citizen is aware of this huge problem that puts the future gravely at risk [un immense problème qui hypothèque très gravement l’avenir].

Europe should unite around a project for a new Enlightenment that is at once scientific, philosophical, industrial, and economic; which fully seizes the immense challenges brought by computation [la numérique]. Such a politics should be based upon an unprecedented use of universities and research organisations. The very nature of knowledge is destabilised by the digital. For example, to work in the nanosciences today means working with digital artifacts to produce nanoscale phenomena, that is to say, at the quantum level. These are not actually digital phenomena, i.e. objects of intuition, but what Kant called the “noumena”, that is to say the objects of understanding and reason. However, these are objects that are, at the same time, completely constituted in nanophysics, and are the objects of scientific experiments, by being simulated and mathematically modelled using computers. Genetic biology is made possible today by biostations, that is: by informational calculations made on very large amounts of data. The digital alters the practices of mathematicians, but Frederic Kaplan has shown it also modifies the development of languages. [Furthermore] geography has become fundamentally linked to geographical information systems, as the GPS standard has been socialised within our everyday lives. The structures of digitisation are transforming all knowledge, including know-how [les savoir faires] and life skills [les savoir vivre].

Faced with such a universal upheaval it is essential to reconfigure all academic research and to organise new links between universities and school curriculums [les pratiques scolaires] so that the digital enters schools on a rational basis, not through the stories told [storytelling] by economic actors advocating a legitimation of their own models [of the digital]. This model [of working] should be analysed, critiqued and continuously improved, for that is [the practice of] reason. Taking such a critique to the global level, Europe could reconstruct a digital industry which is currently tragically lacking. It is not only children but also parents, and elected officials, that need to be acculturated [to such changes] through schools, so that European society can be deeply reconfigured, and take with it a new model of the web.

Notes

1. In the 18th century, the Marquis de Condorcet developed a method of tallying votes that selects the candidate who would beat each of the other candidates in head-to-head contests (a sketch of such a pairwise tally follows these notes).

2. Jules Ferry was a 19th century republican who promoted laicism and French colonial expansion.

3. François Guizot was a prominent 19th century statesman who significantly promoted education.
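
Purely as an illustrative aside: the pairwise logic described in note 1 can be sketched in a few lines of code. This is a minimal sketch, assuming complete ranked ballots and hypothetical candidate names, not a reconstruction of Condorcet's own tallying procedure:

```python
from itertools import combinations

def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate in head-to-head
    comparisons, or None if no such candidate exists (a Condorcet 'cycle').

    Assumes each ballot ranks all candidates, most preferred first.
    """
    candidates = sorted({c for ballot in ballots for c in ballot})
    # wins[a][b] counts the ballots that rank candidate a above candidate b
    wins = {a: {b: 0 for b in candidates if b != a} for a in candidates}
    for ballot in ballots:
        rank = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            if rank[a] < rank[b]:
                wins[a][b] += 1
            else:
                wins[b][a] += 1
    for a in candidates:
        if all(wins[a][b] > wins[b][a] for b in candidates if b != a):
            return a
    return None

# Three voters, three (hypothetical) candidates: "A" beats "B" 2-1 and "C" 3-0, so "A" wins.
print(condorcet_winner([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]))
```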

Algorithmic Practices: Emergent interoperability in the everyday

This year the RGS-IBG Annual Conference will be coming to the University of Exeter, my institution, and there are some exciting sessions already in the pipeline. I wanted to bring one to the attention of those who happen to look at this website:

Algorithmic Practices: Emergent interoperability in the everyday

Sponsored by: the History and Philosophy of Geography Study Group
Convened by: Eric Laurier and Chris Speed (Edinburgh) and Monika Buscher (Lancaster)

An ever-increasing proportion of the interactions that we have with digital platforms, apps and devices is mediated according to complex algorithms. Whether it be the real-time analytics that draw us into playing a game on our phone, or tailored recommendations built from our historical searching and buying habits, we structure our daily lives in response to ‘performative infrastructures’ (Thrift, 2005: 224), most of them hidden deliberately by their makers.

Yet, in responding to the summons, the predictions, the recommendations, the help, the calculations that occur as platforms try to anticipate our next actions, we are learning how they work and don’t work. In our ad hoc assemblies of devices, apps and screens we short cut and re-make algorithms. For instance, in disaster response, ad hoc interoperability and agile response are creating incentives for ‘systems of systems’ that allow locally accomplished convergence of diverse information systems, with implications for data surge capacity as well as protection and privacy (Mendonça et al 2007).

Described as “a structured approach to real-time mixing and matching of diverse ICTs to support individuals and organizations in undertaking response”, emergent interoperability may be becoming commonplace in less dramatic daily practices as individuals negotiate the range of algorithms that “react and reorganize themselves around the users” (Beer 2009).

This panel invites papers and presentations that provide insight into the conditions and settings in which emergent operability and interoperability occur within society.

Areas of interest:

  • Dashboards, decongestion, security and the other promises of smart cities (Kitchin 2014)
  • Wayfaring, wayfinding and other mobilities with map apps (Brighenti 2012)
  • Changing forms of participation and collaboration through social media algorithms
  • Responding to and attempting to manipulate predicted actions and recommendations
  • The production of calculated publics (Gillespie 2014)
  • Political contestation of algorithms
  • Ad hoc systems
  • Textures of communication in digital and traditional media

The deadline is Friday 6th February 2015.

Email speakers, title and abstract to eric [dot] laurier [at] ed.ac.uk

References:

Beer, D. (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002.

Brighenti, A. M. (2012) New Media and Urban Motilities: A Territoriologic Point of View. Urban Studies, 49(2), 399–414.

Gillespie, T. (2014) The Relevance of Algorithms. In T. Gillespie, P. Boczkowski and K. Foot (eds) Media Technologies: Essays on Communication, Materiality and Society. Cambridge, MA: MIT Press, pp. 167–194.

Kitchin, R. (2014) Thinking critically about and researching algorithms. The Programmable City Working Paper 5, Maynooth.

Mendonça, D., Jefferson, T. and Harrald, J. (2007) Emergent Interoperability: Collaborative Adhocracies and Mix and Match Technologies in Emergency Management. Communications of the ACM, 50(3), 44–49.

Thrift, N. (2005) Knowing Capitalism. London: Sage.