AI as organ-on|-ology

KITT, the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body, it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as they inculcate forms of incapacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things are done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’ I blogged a little while ago.

ADD. If this sounds a little like the ‘extended mind’ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different is the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’, but rather in ‘how we know’.

‘New geographies of automation?’ at the RGS-IBG conference

Industrial factory robot arms

All of a sudden the summer is nearly over, apparently, and the annual conference of the Royal Geographical Society with the Institute of British Geographers is fast approaching, this year in Cardiff.

I am convening a double session on the theme of ‘New geographies of automation?’, with two sessions of papers by some fantastic colleagues that promise to be really interesting. I am really pleased to have this opportunity to invite colleagues to collectively bring their work into conversation around a theme that is not only a contemporary topic in academic work but also, significantly, a renewed topic of interest in the wider public.

There are two halves of the session, broadly themed around ‘autonomy’ and ‘spacings’. Please find below the abstracts for the session.

Details: Sessions 92 & 123 (in slots 3 & 4 – 14:40-16:20 & 16:50-18:30) | Bates Building, Lecture Theatre 1.4

This information is also accessible, with all of the details of venue etc., on the RGS-IBG conference website: session 1 ‘autonomy’ and session 2 ‘spacings’.

New Geographies of Automation? (1): Autonomy

1.1 An Automative Imagination

Samuel Kinsley, University of Exeter

This paper sets out to review some of the key ways in which automation gets imagined – the sorts of cultural, economic and social forms of imagination that are drawn upon and generated when discussing how automation works and the kinds of future that may come as a result. The aim here is not to validate/invalidate particular narratives of automation – but instead to think about how they are produced and what they tell us about how we tell stories about what it means to be ‘human’, who/what has agency and what this may mean for how we think politically and spatially. To do this the concept of an ‘automative imagination’ is proposed as a means of articulating these different, sometimes competing – sometimes complementary, orientations towards automation.


1.2 The Future of Work: Feminist Geographical Engagements

Julie MacLeavy (Geographical Sciences, University of Bristol)

This paper considers the particular pertinence of feminist geographical scholarship to debates on the ‘future of work’. Drawing inspiration from Linda McDowell’s arguments that economic theories of epochal change rest on the problematic premise that economic and labour market changes are gender-neutral, it highlights the questions that are emerging from feminist economic geography research and commentary on the reorganisation of work, workers’ lives and labour markets. From this, the paper explores how feminist and anti-racist politics connect with the imagination of a ‘post-work’ world in which technological advancement is used to enable more equitable ways of practice (rather than more negative effects such as the intensification of work lifestyles). Political responses to the critical challenges that confront workers in the present moment of transformation are then examined, including calls for Universal Basic Income, which has the potential to reshape the landscape of labour-capital relations.


1.3 Narrating the relationship between automation and the changing geography of digital work

Daniel Cockayne, Geography and Environmental Management, University of Waterloo

Popular narratives about the relationship between automation and work often make a straightforward causal link between technological change and deskilling, job loss, or increased demand for jobs. Technological change – today, most commonly, automation and AI – is often scripted as threatening the integrity of labor, unionization, and traditional working practices or as creating more demand for jobs, in which the assumption is the more jobs the better. These narratives elide a close examination of the politics of work that include considerations of domestic and international racialized and gendered divisions of labor. Whether positive or negative, the supposed inevitability of technological transition positions labor as a passive victim of these changes, while diverting attention away from the workings of international financialized capital. Yet when juxtaposed against empirical data, straightforward cause and effect narratives become more complex. The unemployment rate in North America is at its lowest in 40 years (4.1% in the USA and 5.7% in Canada), which troubles the relationship between automation and job loss. Yet, though often touted by publications like The Economist as a marker of national economic well-being, unemployment rates ignore the kinds of work people are doing, effacing the qualitative changes in work practices over time. I examine these tropes and their relationship to qualitative changes in work practices, to argue that the link between technological change and the increasing precaritization of work is more primary than the diversionary relationship between technological change and job loss and gain or deskilling.


1.4 Sensing automation

David Bissell, University of Melbourne

Processes of industrial automation are intensifying in many sectors of the economy through the development of AI and robotics. Conventional accounts of industrial automation stress the economic imperatives to increase economic profitability and safety. Yet such coherent snapped-to-grid understandings risk short-circuiting the complexity and richness of the very processes and events that compose automation. ­­­This paper draws from and reflects through a series of encounters with workers engaged in the increasingly automated mining sector in Australia. Rather than thinking these encounters solely through their representational dimensions with an aim to building a coherent image of what automation is, this paper is an attempt at writing how automation becomes differently disclosed through the aesthetic dimensions of encounters. It acknowledges how automation is always caught up in multiple affective and symbolic ecologies which create new depths of association. Developing post-phenomenological thought in cultural geography, this paper articulates some of the political and ethical stakes for admitting ambiguity, incoherence and confusion as qualities of our relations with technological change.


1.5 Technological Sovereignty, Post-Human Subjectivity, and the Production of the Digital-Urban Commons

Casey Lynch (School of Geography and Development, University of Arizona)

 As cities become increasingly monitored, planned, and controlled by the proliferation of digital technologies, urban geographers have sought to understand the role of software, big data, and connected infrastructures in producing urban space (French and Thrift 2002; Dodge, Kitchin, and Zook, 2009). Reflections on the “automatic production of space” have raised questions about the role and limitations of “human” agency in urban space (Rose 2017) and the possibilities for urban democracy. Yet, this literature largely considers the proliferation of digital infrastructures within the dominant capitalist, smart-city model, with few discussions of the possibilities for more radically democratic techno-urban projects. Engaging these debates, this paper considers alternative models of the techno-social production of urban space based around the collective production and management of a common digital-urban infrastructure. The paper reflects on the notion of “technological sovereignty” and the case of Guifinet, the world’s largest “community wireless network” covering much of Catalonia.  The paper highlights the way its decentralized, DIY mode of producing and maintaining digital urban infrastructure points to the possibilities for more radically democratic models of co-production in which urban space, technological infrastructures, and subjectivities are continually reshaped in relation. Through this, the paper seeks to contribute to broader discussions about the digitalization of urban space and the possibilities for a radical techno-politics.  

New Geographies of Automation? (2): Spacings

2.1 The urbanisation of robotics and automated systems – a research agenda
Andy Lockhart* (a.m.lockhart@sheffield.ac.uk), Aidan While* (a.h.while@sheffield.ac.uk), Simon Marvin (s.marvin@sheffield.ac.uk), Mateja Kovacic (m.kovacic@sheffield.ac.uk), Desiree Fields (d.fields@sheffield.ac.uk) and Rachel Macrorie (r.m.macrorie@sheffield.ac.uk) (Urban Institute, University of Sheffield)
*Attending authors
Pronouncements of a ‘fourth industrial revolution’ or ‘second machine age’ have stimulated significant public and academic interest in the implications of accelerating automation. The potential consequences for work and employment have dominated many debates, yet advances in robotics and automated systems (RAS) will have profound and geographically uneven ramifications far beyond the realm of labour. We argue that the urban is already being configured as a key site of application and experimentation with RAS technologies. This is unfolding across a range of domains, from the development of autonomous vehicles and robotic delivery systems, to the growing use of drone surveillance and predictive policing, to the rollout of novel assistive healthcare technologies and infrastructures. These processes and the logics underpinning them will significantly shape urban restructuring and new geographies of automation in the coming years. However, while there is growing research interest in particular domains, there remains little work to date which takes a more systemic view. In this paper we do three things, which look to address this gap and constitute the contours of a new urban research agenda. First, we sketch a synoptic view of the urbanisation of RAS, identifying what is new, what is being enabled as a result and what should concern critical scholars, policymakers and the wider public in debates about automation. Second, we map out the multiple and sometimes conflicting rationalities at play in the urbanisation of RAS, which have the potential to generate radically different urban futures, and may address or exacerbate existing socio-spatial inequalities and injustices. Third, and relatedly, we pose a series of questions for urban scholars and geographers, which constitute the basis for an urgent new programme of research and intervention.


2.2 Translating the signals: Utopia as a method for interrogating developments in autonomous mobility

Thomas Klinger (1, 2)
Brendan Doody (2)
Debbie Hopkins (2)
Tim Schwanen (2)
1. Institute of Human Geography, Goethe-University Frankfurt am Main
2. School of Geography and the Environment, University of Oxford

Connected and autonomous vehicles (CAVs) are often presented as technological ‘solutions’ to problems of road safety, congestion, fuel economy and the cost of transporting people, goods and services. In these dominant techno-economic narratives ‘non-technical’ factors such as public acceptance, legal and regulatory frameworks, cost and investment in testing, research and supporting infrastructure are the main ‘barriers’ to the otherwise steady roll-out of CAVs. Drawing on an empirical case study of traffic signalling, we trace the implications that advances in vehicle autonomy may have for such mundane and taken-for-granted infrastructure. We employ the three modes of analysis associated with Levitas’ (2013) ‘utopia as a method’. Starting with the architectural mode we identify the components, actors and visions underpinning ‘autonomobility’. The archaeological mode is then used to unpack the assumptions, contradictions and possible unintended effects that CAVs may have for societies. In the ontological mode we speculate upon the types of human and non-human subjectivities and agencies implied by alleged futures of autonomous mobility. Through this process we demonstrate that techno-economic accounts overemphasise the likely scale, benefits and impacts these advances may have for societies. In particular, they overlook how existing automobile-dependent mobility systems are the outcome of complex assemblages of social and technical elements (e.g., cars, car-drivers, roads, petroleum supplies, novel technologies and symbolic meanings) which have become interlinked in systemic and path-dependent ways over time. We conclude that utopia as method may provide one approach by which geographers can interrogate and open up alarmist/boosterish visions of autonomobility and automation.


2.3 Automating the laboratory? Folding securities of malware
Andrew Dwyer, University of Oxford
andrew.dwyer@cybersecurity.ox.ac.uk

Folding, weaving, and stitching is crucial to contemporary analyses of malicious software; generated and maintained through the spaces of the malware analysis laboratory. Technologies entangle (past) human analysis, action, and decision into ‘static’ and ‘contextual’ detections that we depend on today. A large growth in suspect software on which to draw decisions about maliciousness has driven a movement into (seemingly omnipresent) machine learning. Yet this is not the first intermingling of human and technology in malware analysis. It draws on a history of automation, enabling interactions to ‘read’ code in stasis; build knowledges in more-than-human collectives; allow ‘play’ through a monitoring of behaviours in ‘sandboxed’ environments; and draw on big data to develop senses of heuristic reputation scoring.

Though we can draw on past automation to explore how security is folded, made known, rendered as something knowable: contemporary machine learning performs something different. Drawing on Louise Amoore’s recent work on the ethics of the algorithm, this paper queries how points of decision are now more-than-human. Automation has always extended the human, led to loops, and driven alternative ways of living. Yet the contours, the multiple dimensions of the neural net, produce the malware ‘unknown’ that have become the narrative of the endpoint industry. This paper offers a history of the automation of malware analysis from static and contextual detection, to ask how automation is changing how cyberspace becomes secured and made governable; and how automation is not something to be feared, but tempered with the opportunities and challenges of our current epoch.


2.4 Robots and resistance: more-than-human geographies of automation on UK dairy farms

Chris Bear (Cardiff University; bearck@cardiff.ac.uk)
Lewis Holloway (University of Hull; l.holloway@hull.ac.uk)

This paper examines the automation of milking on UK dairy farms to explore how resistance develops in emerging human-animal-technology relations. Agricultural mechanisation has long been celebrated for its potential to increase the efficiency of production. Automation is often characterised as continuing this trajectory; proponents point to the potential for greater accuracy, the removal of less appealing work, the reduction of risks posed by unreliable labour, and the removal of labour costs. However, agricultural mechanisation has never been received wholly uncritically; studies refer to practices of resistance that have developed due to fears around (for instance) impacts on rural employment, landscapes, ecologies and traditional knowledge practices. Drawing on interviews with farmers, observational work on farms and analysis of promotional material, this paper examines resistant relations that emerge around the introduction of Automated Milking Systems (AMS) on UK dairy farms. While much previous work on resistance to agricultural technologies has pitted humans against machines, we follow Foucault in arguing that resistance can be heterogeneous and directionally ambiguous, emerging through ‘the capillary processes of counter-conduct’ (Holloway and Morris 2012). These capillary processes can have complex geographies and emerge through more-than-human relations. Where similar conceptualisations have been developed previously, technologies continue to appear rather inert – they are often the tools by which humans attempt to exert influence, rather than things which can themselves ‘object’ (Latour 2000), or which are co-produced by other nonhumans rather than simply imposed or applied by humans. We begin, therefore, to develop a more holistic approach to the geographies of more-than-human resistance in the context of automation.


2.5 Fly-by-Wire: The Ironies of Automation and the Space-Times of Decision-Making

Sam Hind (University of Siegen; hind@locatingmedia.uni-siegen.de)

This paper presents a ‘prehistory’ (Hu 2015) of automobile automation, by focusing on ‘fly-by-wire’ control systems in aircraft. Fly-by-wire systems, commonly referred to as ‘autopilots’, work by translating human control gestures into component movements, via digital soft/hardware. These differ historically from mechanical systems in which pilots have direct steering control through a ‘yoke’ to the physical components of an aircraft (ailerons etc.), via metal rods or wires. Since the launch of the first commercial aircraft with fly-by-wire in 1988, questions regarding the ‘ironies’ or ‘paradoxes’ of automation (Bainbridge 1983) have continued to be posed. I look at the occurrence of ‘mode confusion’ in cockpits to tease out one of these paradoxes; using automation in the aviation industry as a heuristic lens to analyze automation of the automobile. I then proceed by detailing a scoping study undertaken at the Geneva Motor Show in March this year, in which Nissan showcased an autonomous vehicle system. Unlike other manufacturers, Nissan is pitching the need for remote human support when vehicles encounter unexpected situations; further complicating and re-distributing navigational labour in, and throughout, the driving-machine. I will argue that whilst such developments plan to radically alter the ‘space-times of decision-making’ (McCormack and Schwanen 2011) in the future autonomous vehicle, they also exhibit clear ironies or paradoxes found similarly, and still fiercely discussed, in the aviation industry with regards to fly-by-wire systems. It is wise, therefore, to consider how these debates have played out – and with what consequences.

Janelle Monáe – Dirty Computer, an emotion picture

Still from Janelle Monáe's Dirty Computer video

I came across Janelle Monáe‘s work a while ago, through Twitter, and I was really taken by the video for “Many Moons“, which is beautiful. Metropolis, the album from which it is taken, is a really interesting blend of pop, sci-fi and perhaps afrofuturism, or at least forms of sci-fi that don’t conform to, or queer, standard Western/Global North sci-fi themes/norms. Some have argued Monáe’s videos blend American and African sci-fi themes (a teaser trailer for Dirty Computer was shown before cinematic performances of Black Panther) in a sort of queer aesthetic (that’s my reading of what longer pieces say anyway) and I think I can see it in several videos, though my knowledge of other work that might complement or contrast this is very limited.

In the “emotion picture” (what a beautifully evocative term) for the album Dirty Computer we’re presented with a rich and confident, feature-length work of art. I’m not currently able to dedicate the time to offer a lengthier visual analysis, so I’m simply going to post the video, below. All I can say, really, is: wow.

Seminar> Charis Thompson: On the Posthuman in the Age of Automation and Augmentation

Still from the video for All is Love by Bjork

If you happen to be in Exeter on Friday 11th May then I urge you to attend this really interesting talk by Prof. Charis Thompson (UC Berkeley), organised by Sociology & Philosophy at Exeter. Here’s the info:

Guest speaker – Professor Charis Thompson: On the Posthuman in the Age of Automation and Augmentation

A Department of Sociology & Philosophy lecture
Date 11 May 2018
Time 14:00 to 15:15
Place IAIS Building/LT1

Charis Thompson is Chancellor’s Professor, Gender and Women’s Studies and the Center for Science, Technology, Medicine and Society, UC Berkeley, and Professor, Department of Sociology, London School of Economics. She is the author of Making Parents: The Ontological Choreography of Reproductive Technologies (MIT Press 2007), which won the Rachel Carson Prize from the Society for Social Studies of Science, and of Good Science: The Ethical Choreography of Stem Cell Research (MIT Press 2013). Her book in progress, Getting Ahead, revisits classic questions on the relation between science and democracy in an age of populism and inequality, focusing particularly on genome editing and AI.

She served on the Nuffield Council Working Group on Genome Editing, and serves on the World Economic Forum’s Global Technology Council on Technology, Values and Policy. Thompson is a recipient of UC Berkeley’s Social Science Distinguished Teaching Award.  In 2017, she was awarded an honorary doctorate from the National Science and Technology University of Norway for work on science and society.

SPA PGR Conference Committee
Maria Dede
Aimee Middlemiss
Celia Plender
Elena Sharratt

Retooling ‘the human’ – animal constructions and technical knowledge

A crow using a stick as a tool

A really interesting review, by Emma Stamm, of what seems like an equally interesting book: Ashley Shew’s Animal Constructions and Technical Knowledge. Full review on Social Epistemology Review and Reply Collective.

In its investigation of the flaws of anthropocentrism, Animal Constructions implies a deceptively straightforward question: what work does “the human clause” do for us? —  in other words, what has led “the human” to become so inexorably central to our technological and philosophical consciousness? Shew does not address this head-on, but she does give readers plenty of material to begin answering it for themselves. And perhaps they should: while the text resists ethical statements, there is an ethos to this particular question.

Applied at the societal level, an investigation of the roots of “the human clause” could be leveraged toward democratic ends. If we do, in fact, include tools made and used by nonhuman animals in our definition of technology, it may mar the popular image of technological knowledge as a sort of “magic” or erudite specialization only accessible to certain types of minds. There is clear potential for this epistemological position to be advanced in the name of social inclusivity.

Whether or not readers detect a social project among the conversations engaged by Animal Constructions, its relevance to future studies is undeniable. The maps provided by Animal Constructions and Technical Knowledge do not tell readers where to go, but will certainly come in useful for anybody exploring the nonhuman territories of the 21st century. Indeed, Animal Constructions and Technical Knowledge is not only a substantive offering to philosophy of technology, but a set of tools whose true power may only be revealed in time.

Reblog> Call for papers: “Human-technology relations: postphenomenology and philosophy of technology”

Via Peter-Paul Verbeek.

Call for papers: “Human-technology relations: postphenomenology and philosophy of technology”

The international conference “Human-Technology Relations: Postphenomenology and Philosophy of Technology” will take place on July 11-13, 2018 at the University of Twente, the Netherlands. The conference intends to bring together philosophers, scholars, artists, designers, and engineers in order to foster dialogue and creative collaborations around the interactions between humans, technologies, and society. As has been emphasized by several authors (e.g. Ihde, Haraway, Feenberg, Vallor, Latour, Verbeek), we cannot uphold strict distinctions between humans and technologies. As human beings, we are always interwoven with technologies in our daily practices. This conference aims to reflect on the consequences of this idea in philosophy, ethics, science, sociology, design, art, politics, anthropology, engineering, etc.

We welcome abstract submissions for posters, papers, panels, and workshops:

* Papers – abstract of 250 words and up to three keywords [20-minute talk, 10 minutes for discussion].

* Panel proposals – can consist of up to 4 abstracts (see above) and should include a general description (200 words) and the CVs of the participants and organiser.

* Posters – abstract of 250 words and up to three keywords (Master’s students only; there will be a poster competition).

* Workshops – description of 250 words, plus any specific requirements.

Abstracts, along with a short CV, should be sent to phtr2018@gmail.com.

For an extended call for papers and description of the conference, please, visit https://www.utwente.nl/en/phtr/Call%20for%20Papers/

The conference is organized and financially supported by The Netherlands Organisation for Scientific Research (NWO) Project Theorizing Technological Mediation.

‘An Encounter Between Don Ihde and Bernard Stiegler: Philosophy of Technology at the Crossroads Again’ Nijmegen 11-12 Jan. 2018

Glitched image of a mural of Prometheus giving fire to humans in Freiberg

Via Yuk Hui. Fascinating line-up of speakers:

International Expert Workshop: ‘An Encounter Between Don Ihde and Bernard Stiegler: Philosophy of Technology at the Crossroads Again’

Nijmegen, 11-12 January 2018
Organized by Pieter Lemmens (Radboud University) and Yoni Van Den Eede (Free University of Brussels)

Quo vadis philosophy of technology? The defining moment in contemporary philosophy of technology has undoubtedly been the “empirical turn” of the 1990s and 2000s. Contra older, so-called essentialist approaches that saw technology as an all-encompassing phenomenon or force, the turn inaugurated more “micro-level” analyses of technologies studied in their specific use contexts. However, the empirical turn is now increasingly being called into question, with scholars asking whether the turn has not been pushed too far – certainly given recent technological developments that seem to give technology an all-encompassing or all-penetrating countenance (again): pervasive automation through algorithms and (ro)bots, nanoprobes, biotechnology, neurotechnology, etc. Also, the ecological urgency characterizing our “anthropocenic condition” appears to call for more broad-ranging perspectives than the mere analysis of concrete use contexts. At the same time, nevertheless, the “empirical attitude” keeps demonstrating its usefulness for the philosophical study of technologies on a day-to-day basis… Where do we go from here?

This two-day workshop will be dedicated to these questions by way of an encounter between two leading “streams” in philosophical thinking on technology today. Quite literally, we organize a dialogue between two key figures, Don Ihde and Bernard Stiegler, and their respective frameworks: postphenomenology-mediation theory and techno-phenomenology-general organology. Together with these two thinkers and a select group of scholars, we will reflect upon the near-future form that philosophical thinking on technology should take in a world struggling with multiple global crises – the planetary ecological crisis being the gravest one.

Speakers: Prof. dr. Don Ihde (Stony Brook University), Prof. dr. Bernard Stiegler (University of Compiègne), Prof. dr. Mark Coeckelbergh (University of Vienna), Dr. Yoni van den Eede (Free University of Brussels), Dr. Yuk Hui (Leuphana University), Dr. Pieter Lemmens (Radboud University), Dr. Helena de Preester (Ghent University), Prof. dr. Robert C. Scharff (University of New Hampshire), Dr. Dominic Smith (University of Dundee), Prof. dr. Peter-Paul Verbeek (University of Twente), Dr. Galit Wellner (Tel Aviv University)

Venue: Villa Oud Heyendael, René Descartesdreef 21, 6525 GL Nijmegen (Campus RU)
Time: 11 and 12 January 2018, 10.00h – 17.30h

More information: http://www.ru.nl/ftr/actueel/agenda/@1134542/expert-seminar-philosophy-technology-at-the/

Made possible by the Faculty of Science, the Institute for Science in Society, the Faculty of Philosophy, Theology and Religious Studies, the International Office of Radboud University and the Centre for Ethics and Humanism, Free University of Brussels (VUB)

Brian Cox, cyberpunk

Man with a colander on his head attached to electrodes

Doing public comms of science is hard, and it’s good to have people trying to make things accessible, and to excite and interest people about finding things out about the world… but it can tip over into being daft pretty easily.

Here’s the great D:ream-er Brian Cox going all cyberpunk on brain/mind uploads… (note the lad raising his eyes to the ceiling at 0:44 🙂 )

This made me wonder how Hubert Dreyfus would attempt to dispel the d:ream (don’t all groan at once!) as the ‘simulation of brains/minds’ is precisely the version of AI that Dreyfus was critiquing in the 1970s. If you’re interested in further discussion of ‘mind uploading’, and not my flippant remarks, see John Danaher’s writing on this on his excellent blog.

Reblog> Angela Walch on the misunderstandings of blockchain technology

Blockchain visualisation

Another excellent, recent, episode of John Danaher’s podcast. In a wide-ranging discussion of blockchain technologies with Angela Walch there are lots of really useful explorations of some of the confusing (to me, anyway) aspects of what is meant by ‘blockchain’.

Episode #28 – Walch on the Misunderstandings of Blockchain Technology

In this episode I am joined by Angela Walch. Angela is an Associate Professor at St. Mary’s University School of Law. Her research focuses on money and the law, blockchain technologies, governance of emerging technologies and financial stability. She is a Research Fellow of the Centre for Blockchain Technologies of University College London. Angela was nominated for “Blockchain Person of the Year” for 2016 by Crypto Coins News for her work on the governance of blockchain technologies. She joins me for a conversation about the misleading terms used to describe blockchain technologies.

You can download the episode here. You can also subscribe on iTunes or Stitcher.

Show Notes

  • 0:00 – Introduction
  • 2:06 – What is a blockchain?
  • 6:15 – Is the blockchain distributed or shared?
  • 7:57 – What’s the difference between a public and private blockchain?
  • 11:20 – What’s the relationship between blockchains and currencies?
  • 18:43 – What is a miner? What’s the difference between a full node and a partial node?
  • 22:25 – Why is there so much confusion associated with blockchains?
  • 29:50 – Should we regulate blockchain technologies?
  • 36:00 – The problems of inconsistency and perverse innovation
  • 41:40 – Why blockchains are not ‘immutable’
  • 58:04 – Why blockchains are not ‘trustless’
  • 1:00:00 – Definitional problems in practice
  • 1:02:37 – What is to be done about the problem?
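One of the episode’s points – that blockchains are not strictly ‘immutable’ – can be made concrete with a toy sketch. This is my own illustrative example, not code from the podcast or from any real blockchain: each block commits to the hash of its predecessor, so tampering with an early block is detectable, but only until someone recomputes the later hashes.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (including its prev_hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link records into a chain: each block stores the previous block's hash."""
    chain = []
    prev = "0" * 64  # placeholder hash for the genesis block
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_consistent(chain) -> bool:
    """Verify every block's recorded prev_hash against the recomputed hash."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["alice pays bob", "bob pays carol"])
assert is_consistent(chain)

# Tampering with an early block breaks every later link...
chain[0]["data"] = "alice pays mallory"
assert not is_consistent(chain)
```

The point Walch presses, as I read it, is that the tampered chain fails verification only until someone rewrites the subsequent hashes, which yields an equally well-formed chain. ‘Immutability’ is therefore a social and economic property of who controls the recomputation, not a mathematical guarantee of the data structure itself.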

Reblog> Ordinary digital humanities – event with @rodgers_scott

Another interesting event, this time with Scott Rodgers at Birkbeck.

Ordinary digital humanities: Free event at Birkbeck, 15 May 2017

In a couple of weeks’ time I am happy to be hosting an event as part of Birkbeck Arts Week on the subject of ‘Ordinary Digital Humanities’, featuring a talk from Lesley Gourlay (UCL Institute of Education). The publicity blurb below has more than enough information, I suspect, for you to get the idea. The event comes in significant part out of discussions we’ve been having through Satellite, an experimental subcommittee in the School of Arts at Birkbeck focused on exchanging information and perspectives on the critical, creative, academic and pedagogical dimensions of learning technologies.

Ordinary Digital Humanities
The everyday life of digital technologies in pedagogy

15 May 2017, 6pm, Keynes Library, 43 Gordon Square, WC1H 0PD (part of Birkbeck Arts Week)

Book your free place at: https://www.eventbrite.co.uk/e/ordinary-digital-humanities-the-everyday-life-of-digital-technologies-tickets-33211361075

When one hears the term ‘digital humanities’ what likely comes to mind are innovative applications of computational techniques and technologies to humanities research as well as pedagogy. Or, conversely, the application of concepts and philosophies rooted in the humanities toward the study of emergent digital objects.

There are reasons however to think about the digital humanities in a more ordinary sense, beyond ‘cutting edge’ research or pedagogy. In the introduction to her book How We Think, N. Katherine Hayles suggests that in order to understand the implications of digital technologies for the humanities, we must first consider the ‘low level’ implications of digital technologies for academic life. For instance, the widespread normalisation of email, department websites, software applications, web searches, social media, and much more. For Hayles, a ‘digital humanities’ is more than just innovative new approaches to research and pedagogy. It is also inherently about transformations to the cognitive and embodied environments of academic life at the level of the habitual or everyday.

This forum considers the banal dimensions of the digital humanities, focusing specifically (though not exclusively) on pedagogical practice. It begins with an opening lecture from Dr Lesley Gourlay (abstract below), followed by responses from Dr Grace Halden and Dr Tim Markham from Birkbeck, University of London. We will not only survey the everyday ways digital technologies are being used, and are asserting themselves, in academic life, but also ask how humanities and other scholars might respond to the apparent opportunities and intrusions wrought by digitisation.

The event will be followed by a wine reception.

Flickering texts and the writing body: posthuman perspectives on the digital university

Dr Lesley Gourlay, Reader in Education and Technology, UCL Institute of Education

As digital technologies become increasingly central in our day-to-day lives, we see profound changes in how we search for information, communicate with others, and express ourselves. The university is also changing, as digital media becomes more central via online resources for distance learning, but also throughout the traditional campus, through the mainstream use of digital media for teaching, library provision and academic reading and writing. This has not only changed approaches to academic teaching, it has also fundamentally altered how we seek resources and how we read and generate new knowledge through writing and interaction. Drawing on Hayles’s notion of ’embodied virtuality’, this talk will explore this theme, analysing data from a research project which looked in detail at how a small group of students approached their studies over a six-month period, using drawings, photography and interviews to form an in-depth understanding of their day-to-day lives as students, and how they navigate the complex digital and material landscape of the contemporary university. It looks at how texts change when they move between the paper-based and the digital, how students navigate the complexities of multiple devices and formats, how they make spaces for writing and knowledge while on the move, and how they form part of ‘posthuman’ networks of digital devices and material objects in order to engage in their studies. Taking a historical perspective, the talk concludes that textual practices have always been central to how higher education is organised and how we understand ‘knowledge’, and considers how digital media might continue to change what we call ‘the university’.