Bernard Stiegler: “Rethinking an industrial policy in the era of the Anthropocene and automation”? [translated]

A young man standing in a cloud of yellow smoke

I recently came across an edited interview with Bernard Stiegler published on the website of Philosophie Magazine (17/12/18) [a] in which Stiegler ties together a very brief reading of the ‘yellow vests’ phenomenon with the experiments he has been leading in the creation of an ‘economy of contribution’, offered as a more-or-less ethico-political-economic response to the ‘Anthropocene’. It is important to note here that for Stiegler the ‘Anthropocene’ not only means the current global cultural/environmental/social crisis embodied in a new ‘epoch’ but also, significantly, the apparently rapid changes in employment/work largely due to technology. I have translated conversations with Stiegler about this topic before, and these might be helpful in fleshing out the argument translated below.

Here, in a similar vein to his discussion of previous periods of civil unrest in France (see in particular the books The Decadence of Industrial Democracies, Uncontrollable Societies of Disaffected Individuals and The Lost Spirit of Capitalism), Stiegler diagnoses a form of immiseration that comes from a loss of capacities and that needs to be addressed through a form of therapeutic response. The ‘yellow vests’ movements are a symptom of a broader cultural-environmental-social ‘entropy’ (that is, ‘The Anthropocene’) which needs to be addressed through a re-imagined industrial policy: engaging in what he terms a form of ‘negentropy’. Having said all of this, what is perhaps important about this brief interview is that it locates pragmatic action by talking through what Stiegler and colleagues are doing in the Plaine Commune experiments (for more information follow the links above).

As I have previously observed, I still find it curious that underlying the apparent radicalism of re-thinking industrial strategy, of acting together towards (political) therapeutic ends, is a strange sort of unflinching (dare-I-say even conservative) faith in the state and its institutions. In particular, the model for the central strategy of a ‘contributory income’ is the intermittent entertainment policy through which the French government subsidises freelance and somewhat precarious forms of work in the ‘creative industries’. I’m not criticising this; I think it merits greater discussion – not least because it is being trialed in Seine-Saint-Denis – but there’s something curious about this rather measured scheme being central to the strategy, given the almost apocalyptic and incredibly urgent tone of books like The Neganthropocene and Age of Disruption.

ADD. 24/01/19. I think I probably missed a final step in the thought expressed in the paragraph above: while the scheme for a ‘contributory income’ (based upon the intermittent scheme) currently underway in Plaine Commune is perhaps limited, and while the idea of such an income is, in itself, not especially ‘revolutionary’, perhaps I/we should see this as the beginning of a reorientation – the instigation of a different/new therapeutic ‘tendency’, in Stiegler’s terminology – away from a competitive, individualised economic rationale and towards a collective means of flourishing together, whilst also acknowledging that we need to take some form of collective responsibility. In that vein, as others have pointed out, Stiegler’s ‘activist’ thought/activities take on a particular ethical/moral stance (in this way I have some sympathy with Alexander Galloway calling Stiegler a ‘moral philosopher’).

As usual I have included in square brackets original French, where I’m unsure of the translation, or clarifications. I have also maintained, in the Conversation piece, all of the original francophone hyperlinks unless there is a clear anglophone alternative.

I welcome comments or corrections!

Notes

a. The interview appears in a section entitled Gilets jaunes, et maintenant ? – something like ‘Yellow vests, now what?’

Bernard Stiegler: “Rethinking an industrial policy in the era of the Anthropocene and automation”

For this thinker of technics, the “yellow vests” movement highlights the desperate need for a new policy that would value work rather than employment. Among his proposals is the widening, to everyone, of the government scheme for irregular workers in the creative sector.

I was struck by the rapid evolution of the “yellow vests” movement, by the way it was presented and the way in which it was perceived. In the beginning, the occupations of roundabouts [and crossroads] were reminiscent of the Tea Party phenomenon in the United States, which paved the way for Donald Trump’s election, and of Sarah Palin’s astonishing statement: “I like the smell of exhaust emissions!” However, despite the presence of the “ultra-right”, which is of course very dangerous, the rise of this movement has evolved positively – and very unexpectedly. Compared with the “protest” scene, well-known in France for decades, the “yellow vests” are obviously a very singular and very interesting event, beyond their extreme ambiguities. Amongst the demands made by these leaderless demonstrators, the proposal to create a deliberative assembly for ecological transition is particularly illustrative of something fundamentally new emerging from this movement. This is confirmed by an encouraging sign, which must be interpreted without being under any illusions: the joining of the protest and the climate march in Bordeaux on the 8th of December.

When we listen to the “yellow vests”, we hear the voices of people who are a bit lost, often living in unbearable conditions, but with the virtue of expressing and highlighting our contemporary society’s limits and immense contradictions. In the face of this, the Macron government seems unable to take the measure of the problems being raised. I fear that the measures announced by the President on the 11th of December will resolve nothing and will entrench the movement for the longer term, precisely because it expresses – at least symptomatically – a collective awareness of the contemporary crisis. The political horizon throughout Europe is not at all pleasant: the extreme right will probably draw the electoral benefits of this anger, while failing to answer the questions legitimately posed by the “yellow vests” movement. This highlights the lack of a sense of history of President Macron and his ministers, and equally underlines the vanity of those who pretend to embody the left, who are just as incapable of making even the simplest statement equal to what is the first great social crisis of the Anthropocene.

For me, a “man of the left”, the important question is what a comprehensive leftist industrial policy would look like that takes up the challenges of the Anthropocene and automation – which is to say, that also addresses “Artificial Intelligence”. To confront this question is to attempt to overcome what remains unthought in Marxian criticism, namely: entropy. All complex systems, both biological and social, are doomed to a differential loss – of energy, biodiversity, interpretation of information – that leads to entropic chaos. The concept of negentropy, taken from the work of Erwin Schrödinger, refers to the ability of the living to postpone the loss of energy by differentiating organically, creating islands and niches, locally installing a “différance” (as Derrida said) through which the future [l’avenir] is a bifurcation in an entropic becoming [devenir entropique] in which everything is indifferent.

The fundamental point here is that, while entropy is observed at the macroscopic level, negentropy only occurs locally through energy conversion in all its forms – including libidinal energy. Freud was, with Bergson, the first to understand this radical change in point of view required by entropy. The “nationalist retreat” is a symptomatic expression of the entropic explosion provoked by the globalization [that is the] Anthropocene. This needs to be addressed by a new economic and industrial policy that systematically values negentropy. 

It is in response to such issues that the Institute of Research and Innovation and Ars Industrialis, with Patrick Braouezec (President of the Plaine Commune public territorial establishment), are leading an experiment in Seine-Saint-Denis. In this district of 430,000 inhabitants we are experimenting with putting in place a local economy of contribution, based upon a new macro-economy at the national level. Above all, this scheme values work rather than employment and aims to generalize the system of intermittent entertainment [added emphasis] [1]: the idea is to be able to guarantee people 70% of their most recent salary in the periods when they do not work, provided that within ten months they begin another freelance [intermittent] job. In the case of freelance [intermittent] performers, they must work for 507 hours, after which they have “replenished their right” to a contributory income. We are currently setting up workshops in the areas of child care, quality urban food, construction and urban trades, the conversion of combustion vehicles into clean vehicles, and so on. This experiment is supported by the Fondation de France, Orange, Dassault Systèmes, the Caisse des Dépôts et Consignations, Société Générale, the Afnic Foundation and Emmanuel Faber, General Manager of Danone. Every one of these is a stakeholder in the search for a new conception of the industrial economy, fully mobilized in the fight against the Anthropocene and for the restoration of very-long-term economic solvency, based on investment, not speculation. It is by taking bold initiatives of this kind that we will truly respond to the “yellow vests”.

Notes

1. There is no direct translation for ‘intermittent entertainment’ / ‘intermittents du spectacle’ – this refers to state-subsidised freelance workers in the entertainment industry, an arrangement backed by long-standing legislation in France to support its creative sectors.
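To make the mechanics of the scheme described in the interview a little more concrete, here is a toy sketch of the eligibility rule in Python. The figures (507 hours, a 70% replacement rate, a ten-month window) come from the interview above; the function name, parameters and structure are purely illustrative, not any actual implementation of the French scheme or the Plaine Commune experiment.

```python
# Toy sketch of the contributory-income rule as described in the interview.
# The thresholds below are taken from the text; everything else is illustrative.

REQUIRED_HOURS = 507          # hours of intermittent work to "replenish the right"
REPLACEMENT_RATE = 0.70       # fraction of the most recent salary guaranteed
MAX_MONTHS_BETWEEN_JOBS = 10  # a new intermittent job must begin within this window


def contributory_income(hours_worked: float,
                        last_salary: float,
                        months_since_last_job: float) -> float:
    """Return the income guaranteed during a period without work,
    or 0.0 if the conditions described in the interview are not met."""
    if (hours_worked >= REQUIRED_HOURS
            and months_since_last_job <= MAX_MONTHS_BETWEEN_JOBS):
        return REPLACEMENT_RATE * last_salary
    return 0.0
```

So, on this reading, a worker with 507 hours behind them and a gap of three months would receive 70% of their last salary, while one who let more than ten months elapse without a new intermittent job would receive nothing until the right is replenished again.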

“AI will displace 40 percent of world’s jobs in as soon as 15 years” – Kai-Fu Lee

Industrial factory robot arms

In a widely-trailed CBS ‘60 Minutes’ interview, the AI-pioneer-cum-venture-capitalist Kai-Fu Lee makes the sorts of heady predictions about job replacement/displacement that the media like to lap up. The automative imagination of ‘automation as progress’ in full swagger…

We should perhaps see this in the context of, amongst other things, geopolitical machinations (i.e. China-USA) around trade and intellectual property; a recently published book; a wider trend for claims about robotic process automation (especially in relation to ‘offshoring‘); and a large investment fund predicated upon ‘disruption’.

“Merger” by Keiichi Matsuda – automation, work and ‘replacement’

A still from the 360-degree video "Merger" by Keiichi Matsuda
“With automation disrupting centuries-old industries, the professional must reshape and expand their service to add value. Failure is a mindset. It is those who empower themselves with technology who will thrive.
“Merger is a new film about the future of work, from cult director/designer Keiichi Matsuda (HYPER-REALITY). Set against the backdrop of AI-run corporations, a tele-operator finds herself caught between virtual and physical reality, human and machine. As she fights for her economic survival, she finds herself immersed in the cult of productivity, in search of the ultimate interface. This short film documents her last 4 minutes on earth.”

I came across the most recent film by Keiichi Matsuda, which concerns a possible future of work, with the protagonist embedded in an (aesthetically Microsoft-style) augmented reality of screen-surfaces, and in which the narrative denouement is a sort of trans-human ‘uploading’ moment.

I like Matsuda’s work. I think he skilfully and playfully provokes particular sorts of conversations, mostly about what we used to call ‘immersion’ and the nature of mediation. This has, predictably, happened in terms of human vs. AI vs. eschatology (etc. etc.) sorts of narratives in various outlets (e.g. the Verge). The first time I encountered his work was at a Passenger Films event at which Rob Kitchin talked about theorisations of mediation in relation to both Matsuda’s work and the (original) Disney film ‘Tron‘.

What is perhaps (briefly) interesting here are two things:

  1. The narrative is a provocative short story that asks us to reflect upon how our world of work and technological development get us from now (the status quo) to an apparent future state of affairs, which carries with it certain kinds of ethical, normative and political contentions. So, this is a story that piggybacks the growing narrative of ‘post-work’ or widespread automation of work by apparently ‘inhuman’ technologies (i.e. A.I) that provokes debate about the roles of ‘technology’ and ‘work’ and what it means to be ‘human’. Interestingly, this (arguably) places “Merger” in the genre of ‘fantasy’ rather than ‘science fiction’ – it is, after all, an eschatological story (I don’t see this final point as a negative). I suppose it could also be seen as a fictional suicide note but I’d rather not dwell on that…
  2. The depiction of the interface and the interaction with the technology-world of the protagonist – and indeed the depiction of these within a 360-degree video – are as important as the story to what the video is signifying. By which I mean – like the videos I called ‘vision videos’ back in 2009/10 (which (in some cases) might be called ‘design fiction’ or ‘diegetic prototypes’) – this video is also trying to show you, and perhaps sell you, the idea of a technology (Matsuda recently worked for Leap Motion). As I and others have argued, the more familiar audiences are with prospective/speculative technologies, the more likely we are (perhaps) to sympathise with their funding/ production/ marketing and ultimately to adopt them.

Call for papers: Geography of/with A.I

Still from the video for All Is Full of Love by Björk

I very much welcome any submissions to this call for papers for the proposed session for the RGS-IBG annual conference (in London in late-August) outlined below. I also welcome anyone getting in touch to talk about possible papers or ideas for other sorts of interventions – please do get in touch.

Call for papers:

We are variously being invited to believe that (mostly Global North, Western) societies are on the cusp, or in the early stages, of another industrial revolution led by “Artificial Intelligence” – as many popular books (e.g. Brynjolfsson and McAfee 2014) and reports from governments and management consultancies alike will attest (e.g. PWC 2018, UK POST 2016). The goal of this session is to bring together a discussion explicitly focused on the ways in which geographers already study (with) ‘Artificial Intelligence’ and, perhaps, to outline ways in which we might contribute to wider debates concerning ‘AI’.

There is widespread, inter-disciplinary analysis of ‘AI’ from a variety of perspectives, from embedded systematic bias (Eubanks 2017, Noble 2018) to the kinds of under-examined rationales and work through which such systems emerge (e.g. Adam 1998, Collins 1993), and further to the sorts of ethical-moral frameworks that we should apply to such technologies (Gunkel 2012, Vallor 2016). In similar, if somewhat divergent, ways, geographers have variously been interested in the ways in which (apparently) autonomous algorithms or sociotechnical systems are integrated into decision-making processes (e.g. Amoore 2013, Kwan 2016); encounters with apparently autonomous ‘bots’ (e.g. Cockayne et al. 2017); the integration of AI techniques into spatial analysis (e.g. Openshaw & Openshaw 1997); and the processing of ‘big’ data in order to discern things about, or control, people (e.g. Leszczynski 2015). These conversations appear, in conference proceedings and academic outputs, rarely to converge; nevertheless, there are many ways in which geographical research does and can continue to contribute to these contemporary concerns.

The invitation of this session is to contribute papers that make explicit the ways in which geographers are (already) contributing to research on and with ‘AI’, to identify research questions that are (perhaps) uniquely geographical in relation to AI, and to thereby advance wider inter-disciplinary debates concerning ‘AI’.

Examples of topics might include (but are certainly not limited to):

  • A.I and governance
  • A.I and intimacy
  • Artificially intelligent mobilities
  • Autonomy, agency and the ethics of A.I
  • Autonomous weapons systems
  • Boosterism and ‘A.I’
  • Feminist and intersectional interventions in/with A.I
  • Gender, race and A.I
  • Labour, work and A.I
  • Machine learning and cognitive work
  • Playful A.I
  • Science fiction, spatial imaginations and A.I
  • Surveillance and A.I

Please send submissions (titles, abstracts (250 words) and author details) to: Sam Kinsley by 31st January 2019.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s keynote (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time: exosomatisation and tertiary retention as the principal tendency of an originary technics that, in turn, has the character of a pharmakon – something that, in more recent work, Stiegler articulates in relation to the contemporary epoch (the anthropocene) as the (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’) – the anthropocene, attention and automation – which characterise the contemporary tendency towards proletarianisation: a loss of the knowledge and capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years both: in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory), and: in relation to what seems to me to be a moral philosophical character to Stiegler’s project, in terms of his diagnosis of the anthropocene and a call for a ‘neganthropocene’.

Sophia – show robots and deception

Hanson Robotics' "Sophia"

…when first we practice to deceive…

Walter Scott

Prof Noel Sharkey has written a thoughtful, informative and entertaining piece for Forbes (so, for a general audience) that does some unpacking of ‘Sophia’ with reference to the history of ‘show robots’ (such as the Westinghouse show robots of the mid-C20, like Elektro, and of course Honda’s Asimo). It’s worth reading the piece in full but here are a couple of choice clips:

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.


The tradition extends back to the automata precursors of robots in antiquity. Moving statues were used in the temples of ancient Egypt and Greece to create the illusion of a manifestation of the gods. Hidden puppeteers pulled ropes and spoke with powerful booming voices emitted from hidden tubes. This is not so different from how show robots like Sophia operate today to create the illusion of a manifestation of AI.

For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on peoples lives from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services to medicine and care.


It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

Sing the body electric… robots in music videos

Still from the video for All Is Full of Love by Björk

I recently saw the Chemical Brothers’ new-ish video for the song “Free Yourself”, featuring androids/robots apparently going feral and raving in a warehouse, and it made me consciously think about something I’ve known for some time – there are quite a few music videos with ‘robots’ in them.

So, here’s a very partial collection:

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body; it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as they inculcate forms of incapacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’ I blogged a little while ago.

ADD. If this sounds a little like the ‘extended mind‘ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different are the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and of what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

Popular automative imagination (some novels)

Twiki the robot from Buck Rogers

I’ve had about six months of reading various versions of speculative/science fiction after not having read in that genre for a little while… so here’s a selection of books I’ve read (almost exclusively on an ereader) that have more-or-less been selected following the ‘people who read [a] also read [b]’ lists.

I’m not sure these books necessarily offer any novel insights but they do respond to the current milieu of imagining automation (AI, big data, platform-ing, robots, surveillance capitalism etc etc) and in that sense are a sort of very partial (and weird) guide to that imagination and the sorts of visions being promulgated.

I’d like to write more but I don’t have the time or energy so this is more or less a place-holder for trying to say something more interesting at a later date… I do welcome other suggestions though! Especially less conventionally Western ones.

ADD. Jennie Day kindly shared a recent blogpost by David Murakami Wood in which he makes some recommendations for SF books. Some of these may be of interest if you’re looking for wider recommendations. In particular, I agree with his recommendations of Okorafor’s “Lagoon“, which is a great novel.