All catastrophes are algorithmic, even the natural ones, when we consider the universe to be governed by regular and automated laws of motion and principles of emergence.
The article opens with this bold statement, qualified through a reading of Aristotle, which leads us to:
But these material, technological catastrophes are not examples of what I am proposing here to call algorithmic catastrophes. Algorithmic catastrophe doesn’t refer to material failure, but rather to the failure of reason.
And it is this form of catastrophe towards which the discussion tends. Through a reading of the work of Bernard Stiegler, in relation to Martha Nussbaum and Plato/Socrates, Hui argues that the history of Occidental philosophy has sought to master the techno-logical ‘accident’:
This resonates with the two senses of accident that we have explained above: on one hand, the revelation of substance through accidents, meaning the accidents become necessary; on the other hand, the overcoming of the irrational through reason.
Hui then expertly dissects understandings of accident and contingency in relation to what is thought of as ‘automatic’, which leads to this lovely passage:
As an engineer and designer, one has to be assured that it is normal to have a catastrophe. If catastrophe is thus anticipated and becomes a principle of operation, it no longer plays the role it did with the laws of nature. This use of anticipation to overcome catastrophes can never be completed, however, and indeed accident expresses itself in a second level of contingency generated by the machines’ own operations. Herein also lies the second difference between the algorithmic contingency and the contingency of laws of nature, which we would like to approach in the next section. It doesn’t mean that the algorithm itself is not perfect, but rather that the complexity it produces overwhelms the simplicity and clarity of algorithmic thinking. This necessity of contingency takes a different form from the necessity in tragedy and in nature…
Automation then becomes the target of deconstruction, with a haunting of Virilio, explicated through the ‘Flash Crash’. Nevertheless, the tendency here is towards automation that exceeds the human capacity to react, as Hui has it:
The automation of machines will be much faster than human intelligence, and hence will lead to a temporal gap in terms of operation. The gap can produce disastrous effects since the human is always too late, and machines won’t stop on their own. In face of our inability to fully understand the causality, Wiener warns us that “if we adhere simply to the creed of the scientist, that an incomplete knowledge of the world and of ourselves is better than no knowledge, we can still by no means always justify the naive assumption that the faster we rush ahead to employ the new powers for action which are opened up to us, the better it will be.”
The paper moves on to consider a speculative aesthetics of the accident, through a reading of Meillassoux’s ‘speculative realism’. Just as Meillassoux attempts to reach back beyond the ‘ancestrality’ of the human, Hui argues that we are challenged by automation, in the figure of algorithms:
exteriorized reasons, where we find more and more that human reason is becoming less and less capable of understanding the system that it has succeeded in constructing.
In the digital age, accidents in both senses come to the fore and beyond, as indicated by the contingency, the unknown, which also comes to the front.
The algorithmic catastrophe also resonates with current research on speculative reason, especially what Meillassoux proposes as the absolutization of contingency, which reinvents the metaphysical concept of contingency as necessity while it renounces the subjectivist approach towards knowledge. The celebration of speculative reason seems to be an appropriation of the catastrophic aesthetics of our time, where the unknown and black box become the sole explanations.
It is certainly an interesting, if dense, article, and probably requires some knowledge of the philosophical references Hui draws upon to get the most from it.
I am left wondering, as a less-sophisticated non-philosopher, how one might square this argument with technics as the ‘horizon of all possibility to come and all possibility of a future’, pace Stiegler in Technics and Time. The computational ‘transindividuations’ (the becomings of trans-individual assemblages) that we initiate and cultivate through digital ‘industry’ may begin to probe their way into possibilities outside of our sensory or conscious capacities, but they remain, at present, limited precisely by their foundation in a human phenomenological domain. Nevertheless, as Hui argues in his final paragraph:
it would be ignorant to just dismiss the algorithmic catastrophe as something from science fiction. The words of the physicists [Hawking et al. warning about the risks of AI] also remind us of Book III of Plato’s Republic, where the physicians return as guardians of the polis. Should these guardians be scientifically well-trained philosophers or philosophically trained physicians is not a question without importance, since it means a new pedagogical program and a new conception of responsibility. Beyond the reach of this single article, what Virilio proposes as a rethinking of responsibility remains largely undiscussed.
I probably don’t know enough of the references Hui is drawing upon to be able to offer a cogent response to this, but it is a very interesting article and worth a read [it is open access!].