A genealogy of theorising information technology, through Simondon [video]

Glitched image of a mural of Prometheus giving humans fire in Freiberg

This post follows from the video of Bernard Stiegler talking about Simondon’s ‘notion’ of information, in relation to his reading of Simondon and others’ theorisation of technogenesis. That paper was a keynote at the conference ‘Culture & Technics: The Politics of Du Mode’, held by the University of Kent’s Centre for Critical Thought. It is worth highlighting that the whole conference is available on YouTube.

In particular, see the panel session with Anne Sauvagnargues and Yuk Hui discussing the genealogy of Simondon’s thought (as articulated in his two perhaps best-known books). For those interested in (more-or-less) French philosophies of technology (largely of the 20th century) this is a fascinating and actually quite accessible discussion.

Sauvagnargues discusses the historical and institutional climate/context of Simondon’s work and Yuk excavates (in a sort of archaeological manner) some of the key assumptions and intellectual histories of Simondon’s theorisation of individuation, information and technics.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s keynote (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time in relation to exosomatisation and tertiary retention being the principal tendency of an originary technics that, in turn, has the character of a pharmakon, which, in more recent work, Stiegler articulates in relation to the contemporary epoch (the anthropocene) as the (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’) – the anthropocene, attention and automation – which characterise the contemporary tendency towards proletarianisation, a loss of knowledge and of the capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years, both in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory) and in relation to what seems to me to be a moral-philosophical character to Stiegler’s project, in terms of his diagnosis of the anthropocene and a call for a ‘neganthropocene’.

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body: it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as they inculcate a lack of capacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’ I blogged a little while ago.

ADD. If this sounds a little like the ‘extended mind‘ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different is the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

Bernard Stiegler’s Age of Disruption – out soon

Bernard Stiegler being interviewed

Out next year with Polity, this is one of the earlier of Stiegler’s ‘Anthropocene’ books (in terms of publication in French, see also The Neganthropocene), explicating quite a few of the themes that come out in the interviews I’ve had a go at translating over the past three years (see: “The time saved through automation must be given to the people”; “How to survive disruption”; “Stop the Uberisation of society!“; and “Only by planning a genuine future can we fight Daesh“). Of further interest, to some, is that it also contains a dialogue with Nancy (another Derrida alumnus). This book is translated by the excellent Daniel Ross.

Details on the Polity website. Here’s the blurb:

Half a century ago Horkheimer and Adorno argued, with great prescience, that our increasingly rationalised and Westernised world was witnessing the emergence of a new kind of barbarism, thanks in part to the stultifying effects of the culture industries. What they could not foresee was that, with the digital revolution and the pervasive automation associated with it, the developments they had discerned would be greatly accentuated and strengthened, giving rise to the loss of reason and to the loss of the reason for living. Individuals are overwhelmed by the sheer quantity of digital information and the speed of digital flows, and profiling and social media satisfy needs before they have even been expressed, all in the service of the data economy. This digital reticulation has led to the disintegration of social relations, replaced by a kind of technological Wild West, in which individuals and groups find themselves increasingly powerless, driven by their lack of agency to the point of madness.
How can we find a way out of this situation? In this book, Bernard Stiegler argues that we must first acknowledge our era as one of fundamental disruption and detachment. We are living in an absence of epokhē in the philosophical sense, by which Stiegler means that we have lost our noetic method, our path of thinking and being. Weaving in powerful accounts from his own life story, including struggles with depression and time spent in prison, Stiegler calls for a new epokhē based on public power. We must forge new circuits of meaning outside of the established algorithmic routes. For only then will forms of thinking and life be able to arise that restore meaning and aspiration to the individual.
Concluding with a substantial dialogue between Stiegler and Jean-Luc Nancy in which they reflect on techniques of selfhood, this book will be of great interest to students and scholars in social and cultural theory, media and cultural studies, philosophy and the humanities generally.

Inter-Nation – European Art Research Network conference, 19 Oct 2018

A fence in Mexico City delineating a poor area from a wealthy area

This event looks interesting:

Inter-Nation

European Art Research Network | 2018 Conference

Key-Note speakers include:

Dawn Weleski, Conflict Kitchen, Pittsburgh
Bernard Stiegler, Institut de Recherche et d’Innovation, Paris
Michel Bauwens, P2P Foundation

Other participants include: Louise Adkins, Alistair Alexander / Tactical Tech, Lonnie Van Brummelen, David Capener, Katarzyna Depta-Garapich, Ram Krishna Ranjam, Rafal Morusiewicz, Stephanie Misa, Vukasin Nedeljkovic / Asylum Archive, Fiona Woods, Connell Vaughan & Mick O’Hara, Tommie Soro.

Contributory economies are those exchange networks and peer-to-peer (P2P) communities that seek to challenge the dominant value system inherent to the nation-state. This two-day conference addresses these economies through artistic research.

Since the 2008 financial crisis, alternative economies have been increasingly explored through digital platforms, and artistic and activist practices that transgress traditional links between nation and economy.

Digital networks have the potential to challenge traditional concepts of sovereignty and geo-politics. Central to these networks and platforms is a broad understanding of ‘technology’, beyond technical devices, to include praxis-oriented processes and applied knowledges inherent to artistic forms of research. Due to the aesthetic function of the nation, artistic researchers are critically placed to engage with the multiple registers at play within this conference. The guiding concept of the conference, ‘Inter-Nation’, comes from the work of the anthropologist Marcel Mauss (‘A Different Approach to Nationhood’, 1920), who proposed an original understanding of both concepts that opposes traditional definitions of State and Nationalism. More recently, Michel Bauwens has argued for inquiry into the idea of the commons in this context, while Bernard Stiegler has revisited this definition of the ‘Inter-Nation’ as a broader concept in support of the contributory economies emerging in digital culture.

Developed at a crucial time on the island of Ireland, when Brexit is set to redefine relations, the conference engages key thematics emerging out of this situation, such as: digital aesthetics and exchange, network cultures and peer communities, and the geo-politics of centre and margin.

The conference will be hosted across three locations within the city centre: Wood Quay Venue for the main keynote and PhD researcher presentations; Studio 6 at Temple Bar Gallery & Studios for an evening performance event; and Smithfield Market, where a screening event is hosted at the Lighthouse Cinema.

CFP> Moral Machines? The ethics and politics of the digital world, Helsinki, March 2019

Man with a colander on his head attached to electrodes

This looks like an interesting event, though I’m not entirely sure what Stiegler would/will say about “the machine’s capability of non-embodied and non-conscious cognition”. Via Twitter.

Moral Machines? The ethics and politics of the digital world

6–8 March 2019, Helsinki Collegium for Advanced Studies, University of Helsinki

With confirmed keynotes from N. Katherine Hayles (Duke University, USA) and Bernard Stiegler (IRI: Institut de Recherche et d’Innovation at the Centre Pompidou de Paris)

As our visible and invisible social reality is becoming increasingly digital, the question of the ethical, moral and political consequences of digitalization is ever more pressing. Such an issue is too complex to be met only with instinctive digiphilia or digiphobia. No technology is just a tool; all technologies mark their users and environments. Digital technologies, however, mark them much more intimately than any previous ones have done, since they promise to think in our place – so that they not only enhance homo sapiens’ most distinctive feature but also relieve them of it. We entrust computers with more and more functions, and their help is indeed invaluable, especially in science and technology. Some fear or dream that, in the end, they will become so invaluable that a huge Artificial Intelligence or Singularity will take control of the whole affair that humans deal with so messily.

The symposium “Moral Machines? The Ethics and Politics of the Digital World” welcomes contributions addressing the various aspects of the contemporary digital world. We are especially interested in the idea that despite everything they can do, the machines do not really think, at least not like us. So, what is thinking in the digital world? How does the digital machine “think”? Both of our confirmed keynote speakers, N. Katherine Hayles and Bernard Stiegler, have approached these fundamental questions in their work, and one of our aims within this symposium is to bring their approaches together for a lively discussion. Hayles has shown that, for a long time, computers were built with the assumption that they imitate human thought – while in fact, the machine’s capability of non-embodied and non-conscious cognition sets it apart from everything we call thinking. For his part, Bernard Stiegler has shown how technics in general and digital technologies in particular are specific forms of memory that is externalized and made public – and that, at the same time, becomes very different from and alien to individual human consciousness.

We are seeking submissions from scholars studying different aspects of these issues. Prominent work is done in many fields ranging from philosophy and literary studies to political science and sociology, not forgetting the wide umbrella of digital humanities. We hope that the symposium can bring together researchers from the hitherto disconnected fields and thus address the ethics and politics of the digital world in a new and inspiring setting. In addition to the keynotes, our confirmed participants already include Erich Hörl, Fréderic Neyrat and François Sebbah, for instance.

We encourage approaching our possible list of topics (see below) from numerous angles, from philosophical and theoretical to more practical ones. For example, the topics could be approached from the viewpoint of how they have been addressed within the realm of fiction, journalism, law or politics, and how these discourses possibly frame or reflect our understanding of the digital world.

The possible list of topics, here assembled under three main headings, includes but is not limited to:

  • Thinking in the digital world
    • What kind of materiality conditions the digital cognition?
    • How does the nonhuman and nonconscious digital world differ from embodied human thought?
    • How do the digital technologies function as technologies of memory and thought? What kind of consequences might their usage in this capacity have in the long run?
  • The morality and ethics of machines
    • Is a moral machine possible?
    • Have thinking machines invalidated the old argument according to which a technology is only as truthful and moral as its human user? Or can truthfulness and morals be programmed (as the constructors of self-driving cars apparently try to do)?
    • How is war affected by new technologies?
  • The ways of controlling and manipulating the digital world
    • Can and should the digital world be politically controlled, as digital technologies are efficient means of both emancipation and manipulation?
    • How can we control our digital traces and the data gathered about us?
    • On what assumptions are the national and global systems (e.g., financial system, global commerce, national systems of administration, health and defense) designed and do we trust them?
    • What does it mean that public space is increasingly administered by technical equipment made by very few private companies whose copyrights are secret?

“Moral Machines? The Ethics and Politics of the Digital World” is a symposium organized by two research fellows, Susanna Lindberg and Hanna-Riikka Roine at the Helsinki Collegium for Advanced Studies, University of Helsinki. The symposium is free of charge, and there will also be a public evening programme with artists engaging the digital world. Our aim is to bring together researchers from all fields addressing the many issues and problems of the digitalization of our social reality, and possibly contribute towards the creation of a research network. It is also possible that some of the papers will be invited to be further developed for publication either in a special journal issue or an edited book.

The papers to be presented will be selected based on abstracts, which should not exceed 300 words (plus references). Add a bio note (max. 150 words) that includes your affiliation and email address. Name your file [firstname lastname] and submit it as a pdf. If you wish to propose a panel of 3–4 papers, include a description of the panel (max. 300 words), papers (max. 200 words each), and bio notes (max. 150 words each).

Please submit your proposal to moralmachines2019[at]gmail.com by 31 August 2018. Decisions on the proposals will be made by 31 October 2018.

For further information about the symposium, feel free to contact the organizers Susanna Lindberg (susanna.e.lindberg[at]gmail.com) and Hanna-Riikka Roine (hanna.roine[at]helsinki.fi).

Cavell in the LRB

Stanley Cavell

From ‘The Editors’ on the LRB website:

The philosopher Stanley Cavell, who died yesterday at the age of 91, wrote a piece on the Marx Brothers for the LRB in 1993:

Movies magnify, so when pictures began talking they magnified words. Somehow, as in the case of opera’s magnification of words, this made their words mostly ignorable, like the ground, as if the industrialised human species had been looking for a good excuse to get away from its words, or looking for an explanation of the fact that we do get away, even must. The attractive publication, briefly and informatively introduced, of the scripts of several Marx Brothers films … is a sublime invitation to stop and think about our swings of convulsiveness and weariness in the face of these films; to sense that it is essential to the Brothers’ sublimity that they are thinking about words, to the end of words, in every word – or, in Harpo’s emphatic case, in every absence of words.

Michael Wood reviewed Philosophy the Day after Tomorrow in 2005:

The ordinary slips away from us. If we ignore it, we lose it. If we look at it closely, it becomes extraordinary, the way words or names become strange if we keep staring at them. The very notion turns into a baffling riddle. Shall we say that the ordinary doesn’t exist, or that it exists only when we don’t look at it closely? Stanley Cavell has been thinking about the ordinary (although not only about that) for the whole of his philosophical career, and he knows the riddle inside out. But the riddle is not where his interest lies. He doesn’t mind if the world goes strange on us, as long as we keep looking at it, and he is happy to assert ‘the extraordinariness of what we accept as the ordinary’. The question for him is not a linguistic one, and beyond the simple, slippery word is a whole range of human practices crying out for, but not often getting, our attention.

Reblog> Derrida’s Margins: Inside the personal library of Jacques Derrida

Jacques Derrida

Really interesting… via Stuart Elden.

Derrida’s Margins: Inside the personal library of Jacques Derrida

For Jacques Derrida (1930-2004), reading was an active process: he read texts by thinkers like Rousseau, Heidegger, Lévi-Strauss, Hegel, and Husserl with a writing utensil in hand.  As Derrida affirmed in a late interview, the books in his personal library bear the “traces of the violence of pencil strokes, exclamation points, arrows, and underlining.”

Derrida’s Margins invites scholars to investigate these markings while unpacking the library contained within each of Derrida’s published works, beginning with the landmark 1967 text De la grammatologie (Of Grammatology).  Additional Derrida works will be added as the project continues.

The website catalogues each reference (quotation, citation, footnote, etc.) in De la grammatologie and allows users to explore Derrida’s personal copies of the texts he cites. Due to copyright restrictions, only annotated pages corresponding to references in De la grammatologie are shown here; users may also view external images of each book as well as images of the numerous insertions (post-it notes, bookmarks, calendar pages, index cards, correspondence, notes, etc.) Derrida tipped in to his books.

The website includes the following sections, accessible via the links in the four corners of this page: Derrida’s Library, where users may browse or search Derrida’s copies of the books referenced in De la grammatologie; Reference List, where users may browse or search the nearly one thousand references to other texts found in the pages of De la grammatologie; Interventions, where users may browse or search Derrida’s annotations, marginalia, and markings that correspond to the references in De la grammatologie; and Visualization, which provides users with alternative ways of exploring the references in De la grammatologie. Users may search a particular section or the entire site at any time by using the search field at the top of every page.

The Library of Jacques Derrida is housed at Princeton University Library’s Rare Books and Special Collections.

Workshop on feminist philosophy of technology – Vienna 25-26 Oct ’18

Donna Haraway

Via Mark Coeckelbergh.

Workshop on feminist philosophy of technology

Organizers: Dr. Janina Loh, Prof. Dr. Mark Coeckelbergh
Date: October 25-26, 2018
Venue: University of Vienna, Department of Philosophy, Universitätsstr. 7 (NIG), 1010 Vienna

There has been little attention to feminism and gender issues in mainstream philosophy of technology and vice versa: many feminists have focused on societal matters and relationships without taking into account how technics (i.e. technologies and techniques) shape those societies and relationships. However, since the beginning of second-wave feminism in the mid-20th century, a growing awareness of the gravity and urgency of a critical reflection on technology and the sciences within feminist discourses can be observed. But feminist thinkers have not uniformly interpreted technology and science as potentially emancipatory and liberating in every respect. In the same breath, the inherent structures of dominance, marginalization, and oppression have been confronted and disqualified within the feminist paradigm. The question of defining and ascribing responsibility in science and technics – regarding, for instance, the technological transformation of labor, life in the information society, and the relationship between humans and machines – is essential for this workshop. Critical posthumanism and new feminist materialism offer promising examples of a reformulation of known challenges such as essentialism and relativism, a transformation of hypostatized perspectives on traditional categories and dichotomies, as well as the claim to think off the beaten paths of socialist, liberal, and radical feminism.

Confirmed keynote speakers are Corinna Bath, Rick Dolphijn, Nina Lykke, Kathleen Richardson, Lucy Suchman, and Judy Wajcman.

The workshop Feminist Philosophy of Technology has two main aims: it shall present and spark a dialogue between the prominent approaches within feminist philosophy of technology (first day), and on the second day we would like to explore and discuss potential challenges and perspectives within current movements of feminist philosophy of technology. We welcome submissions on any of – but not limited to – the following issues:

  • critical posthumanism
  • new materialist feminism
  • feminist philosophy of technology
  • technofeminism
  • cyberfeminism
  • xenofeminism
  • human-machine-interaction
  • relationships with machines
  • sexrobotics
  • the future of work (industry 4.0, automatization, digitalization)
  • women, technics, and education
  • women, technics, and arts.

Please submit abstracts of around 500 words to janina.loh@univie.ac.at, by July 30. Acceptance notifications will be sent out by the end of August.

How Technology Changes Us – Ihde & Verbeek [video]

A statue of three men hammering

A video of a fairly accessible discussion of broadly ‘post-phenomenological’ theories of technology with the philosophers Don Ihde & Peter-Paul Verbeek. Via dmf.

How Technology Changes Us – Lecture and discussion with philosophers of technology Don Ihde and Peter-Paul Verbeek

Thursday 11 January 2018 | 19.30 – 21.15 hrs | Theater Hall C, Radboud University

“From the bow and arrow to smartphones, changes come along with every new technology. According to Don Ihde, one of the founders of North American philosophy of technology, technology does not only offer us new opportunities, it also changes our relation to the world. Come and listen as the Dutch philosopher of technology Peter-Paul Verbeek and his mentor Don Ihde talk about philosophy and technology in the past, present, and future.”