In a recent article in Critical Inquiry, N. Katherine Hayles formulates an understanding of particular kinds of technological support as ‘cognitive assemblages’ (loosely following Deleuze & Guattari*). She takes as her particular case study the advent of swarming, quasi-autonomous UAVs/drones and their use in warfare. The final paragraph of the piece is interesting but, for me, raises as many questions as it seeks to answer:
Language, human sociality, somatic responses, and technological adaptations, along with emotion, are crucial to the formation of modern humans. Whether warfare should be added to the list may be controversial, but the twentieth and twenty-first centuries suggest that it will persist, albeit in modified forms. As the informational networks and feedback loops connecting us and our devices proliferate and deepen, we can no longer afford the illusion that consciousness alone steers our ships. How should we reimagine contemporary cognitive ecologies so that they become life enhancing rather than aimed toward dysfunctionality and death for humans and nonhumans alike? Recognizing the role played by non conscious cognitions in human/technical hybrids and conceptualizing them as cognitive assemblages is of course not a complete answer, but it is a necessary component. We need to recognize that when we design, implement, and extend technical cognitive systems, we are partially designing ourselves. We must take care accordingly. More accurate and encompassing views of how our cognitions enmesh with technical systems and those of other life forms will enable better designs, humbler perceptions of human roles in planetary cognitive ecologies, and more life-affirming practices as we move toward a future in which technical agency and autonomy become increasingly intrinsic to human complex systems.
(Hayles, 2016: 55)
A couple of immediate questions pop out for me:
- Could designers of UAVs be persuaded to factor in affective responses without reducing them to something like galvanic skin response or another quantifiable measure (which Hayles critiques in relation to MIT Prof. Sandy Pentland’s proselytisation of a ‘sociometer’)?
- What is meant by “accuracy” in that final sentence? How might we qualify the idea of “better” designs? This seems to assert a kind of ethics (and perhaps aesthetics) antithetical to the institutions and companies that make military equipment – and by not addressing that tension, the piece risks making naive assertions.
I appreciate Hayles’ attempt to harness a broadly Deleuzian understanding of cognition (which might be understood as “affect theory”) in attending to pressing contemporary issues such as the rise of “killer robots” (or quasi-autonomous technological platforms that can inflict death). However, it seems to me that the paper uses its case studies (taken from other researchers’ work, such as Chamayou’s) to validate the theory, rather than using the theory to critically interrogate the empirical state of affairs. This is a shame – not least because there is a fruitful application of aspects of How We Became Posthuman here – and, as observed above, it leaves more questions than answers. Maybe that’s productive: it can open debates. But others are doing rather more to qualify how we might problematise ethics in this arena. I’d recommend taking a look at Lucy Suchman’s work, especially “Robot Futures” and the Campaign to Stop Killer Robots…
Addendum: I’m not suggesting that Chamayou and other ‘droners’ are “right” and Hayles is somehow wrong… I’d certainly agree with Prof. Louise Amoore, who suggested on Twitter that those folk could do with reading Hayles’ (and Suchman’s) work.
* I’m uncertain about the proposition of ‘cognitive assemblages’ – if we were to follow D&G’s theory of agencements, would not all ‘assemblages’ be cognitive? The implication seems to be that the ‘cognitive’ in Hayles’ formulation is human cognition – which suggests a human exceptionalism that might be seen as antithetical to D&G’s philosophy.
One Reply to “N. Katherine Hayles on UAVs/drones as ‘cognitive assemblages’”
The Hayles text is a great read for its discussion of the examples of cognitive assemblage she puts forward — the LA traffic control system, the VIV chatbot development and the sociometer. These put flesh on the bones of the notion of cognitive assemblage and raise many questions and themes that can arise from considering technological change from this point of view. So, as a drone researcher, I was looking forward to the final section, on drone systems thought through this optic. That section does not really press ahead with the analysis, however, and tends to pull back to a more ethical/political frame. The final points – made against Ronald Arkin’s voluntarist affirmation of a military-technoscientific rationale for pushing ahead with the lethal autonomous systems agenda – are fair enough: moral ‘intelligence’ and affect are actually valuable and even ‘vital’ elements in developing weapons systems. But the kinds of questions you raise in trying to address automatic swarms and systems development point to the absence of this pursuit of the cognitive-assemblage mode of approach precisely where it was most anticipated in the piece. To this extent I think Chamayou, Derek Gregory and others do more towards this kind of analysis (and I agree that Chamayou in particular is not accommodated or engaged with in a manner reflecting the substance or relevance of his work to this theme).