Bernard Stiegler on disruption & stupidity in education & politics – podcast

Bernard Stiegler being interviewed

Via Museu d’Art Contemporani de Barcelona.

On the Ràdio Web MACBA website there is a podcast interview with philosopher Bernard Stiegler, part of a series called ‘Reimagine Europe’. It covers many of the major themes that have preoccupied Stiegler for the last ten years (if not longer). You can download the podcast as an mp3 for free. Please find the blurb below and a link.

In his books and lectures, Stiegler presents a broad philosophical approach in which technology becomes the starting point for thinking about living together and individual fulfilment. All technology has the power to increase entropy in the world, and also to reduce it: it is potentially a poison or cure, depending on our ability to distil beneficial, non-toxic effects through its use. Based on this premise, Stiegler proposes a new model of knowledge and a large-scale contributive economy to coordinate an alliance between social agents such as academia, politics, business, and banks. The goal, he says, is to create a collective intelligence capable of reversing the planet’s self-destructive course, and to develop a plan – within an urgent ten-year time-frame – with solutions to the challenges of the Anthropocene, robotics, and the increasing quantification of life.

In this podcast Bernard Stiegler talks about education and smartphones, translations and linguists, about economic war, climate change, and political stupidity. We also chat about pharmacology and organology, about the erosion of biodiversity, the vital importance of error, and the Neganthropocene as a desirable goal to work towards, ready to be constructed.

Timeline
00:00 Contributory economy: work vs proletarianization
05:21 Our main organs are outside of our body
07:45 Reading and writing compose the republic
12:49 Refounding Knowledge 
15:03 Digital pharmakon 
18:28 Contributory research. Neganthropy, biodiversity and diversification
24:02 The need of an economic peace
27:24 The limits of micropolitics
29:32 Macroeconomics and Neganthropic bifurcation
36:55 Libido is fidelity
42:33 A pharmacological critique of acceleration
46:35 Degrowth is the wrong question

“Merger” by Keiichi Matsuda – automation, work and ‘replacement’

A still from the 360-degree video "Merger" by Keiichi Matsuda
“With automation disrupting centuries-old industries, the professional must reshape and expand their service to add value. Failure is a mindset. It is those who empower themselves with technology who will thrive.
“Merger is a new film about the future of work, from cult director/designer Keiichi Matsuda (HYPER-REALITY). Set against the backdrop of AI-run corporations, a tele-operator finds herself caught between virtual and physical reality, human and machine. As she fights for her economic survival, she finds herself immersed in the cult of productivity, in search of the ultimate interface. This short film documents her last 4 minutes on earth.”

I came across the most recent film by Keiichi Matsuda, which concerns a possible future of work, with the protagonist embedded in an (aesthetically Microsoft-style) augmented reality of screen-surfaces, and in which the narrative denouement is a sort of trans-human ‘uploading’ moment.

I like Matsuda’s work. I think he skilfully and playfully provokes particular sorts of conversations, mostly about what we used to call ‘immersion’ and the nature of mediation. This has, predictably, happened in terms of human vs. AI vs. eschatology (etc. etc.) sorts of narratives in various outlets (e.g. the Verge). The first time I encountered his work was at a Passenger Films event at which Rob Kitchin talked about theorisations of mediation in relation to both Matsuda’s work and the (original) Disney film ‘Tron‘.

What is perhaps (briefly) interesting here are two things:

  1. The narrative is a provocative short story that asks us to reflect upon how our world of work and technological development get us from now (the status quo) to an apparent future state of affairs, which carries with it certain kinds of ethical, normative and political contentions. So, this is a story that piggybacks the growing narrative of ‘post-work’ or widespread automation of work by apparently ‘inhuman’ technologies (i.e. A.I) that provokes debate about the roles of ‘technology’ and ‘work’ and what it means to be ‘human’. Interestingly, this (arguably) places “Merger” in the genre of ‘fantasy’ rather than ‘science fiction’ – it is, after all, an eschatological story (I don’t see this final point as a negative). I suppose it could also be seen as a fictional suicide note but I’d rather not dwell on that…
  2. The depiction of the interface and the interaction with the technology-world of the protagonist – and indeed the depiction of these within a 360-degree video – are as important as the story to what the video is signifying. By which I mean – like the videos I called ‘vision videos’ back in 2009/10 (which (in some cases) might be called ‘design fiction’ or ‘diegetic prototypes’) – this video is also trying to show you, and perhaps sell you, the idea of a technology (Matsuda recently worked for Leap Motion). As I and others have argued – the more familiar audiences are with prospective/speculative technologies, the more likely we are (perhaps) to sympathise with their funding/production/marketing and ultimately to adopt them.

Call for papers: Geography of/with A.I

Still from the video for All is Love by Bjork

I very much welcome any submissions to this call for papers for the proposed session for the RGS-IBG annual conference (in London in late August), outlined below. I also welcome anyone getting in touch to discuss possible papers or ideas for other sorts of interventions – please do get in touch.

Call for papers:

We are variously being invited to believe that (mostly Global North, Western) societies are on the cusp, or in the early stages, of another industrial revolution led by “Artificial Intelligence” – as many popular books (e.g. Brynjolfsson and McAfee 2014) and reports from governments and management consultancies alike will attest (e.g. PWC 2018, UK POST 2016). The goal of this session is to bring together a discussion explicitly focusing on the ways in which geographers already study (with) ‘Artificial Intelligence’ and to, perhaps, outline ways in which we might contribute to wider debates concerning ‘AI’.

There is widespread, inter-disciplinary analysis of ‘AI’ from a variety of perspectives, from embedded systematic bias (Eubanks 2017, Noble 2018) to the kinds of under-examined rationales and work through which such systems emerge (e.g. Adam 1998, Collins 1993), and further to the sorts of ethical-moral frameworks that we should apply to such technologies (Gunkel 2012, Vallor 2016). In similar, if somewhat divergent, ways, geographers have variously been interested in the kinds of (apparently) autonomous algorithms or sociotechnical systems that are integrated into decision-making processes (e.g. Amoore 2013, Kwan 2016); encounters with apparently autonomous ‘bots’ (e.g. Cockayne et al. 2017); the integration of AI techniques into spatial analysis (e.g. Openshaw & Openshaw 1997); and the processing of ‘big’ data in order to discern things about, or control, people (e.g. Leszczynski 2015). These conversations appear, in conference proceedings and academic outputs, to rarely converge; nevertheless, there are many ways in which geographical research does and can continue to contribute to these contemporary concerns.

The invitation of this session is to contribute papers that make explicit the ways in which geographers are (already) contributing to research on and with ‘AI’, to identify research questions that are (perhaps) uniquely geographical in relation to AI, and to thereby advance wider inter-disciplinary debates concerning ‘AI’.

Examples of topics might include (but are certainly not limited to):

  • A.I and governance
  • A.I and intimacy
  • Artificially intelligent mobilities
  • Autonomy, agency and the ethics of A.I
  • Autonomous weapons systems
  • Boosterism and ‘A.I’
  • Feminist and intersectional interventions in/with A.I
  • Gender, race and A.I
  • Labour, work and A.I
  • Machine learning and cognitive work
  • Playful A.I
  • Science fiction, spatial imaginations and A.I
  • Surveillance and A.I

Please send submissions (titles, abstracts (250 words) and author details) to: Sam Kinsley by 31st January 2019.

A genealogy of theorising information technology, through Simondon [video]

Glitched image of a mural of Prometheus giving humans' fire in Freiberg

This post follows from the video of Bernard Stiegler talking about Simondon’s ‘notion’ of information, in relation to his reading of Simondon and others’ theorisation of technogenesis. That paper was a keynote at the conference ‘Culture & Technics: The Politics of Du Mode‘, held by the University of Kent’s Centre for Critical Thought. It is worth highlighting that the whole conference is available on YouTube.

Of particular interest is the panel session with Anne Sauvagnargues and Yuk Hui discussing the genealogy of Simondon’s thought (as articulated in his two perhaps best-known books). For those interested in (more-or-less) French philosophies of technology (largely in the 20th century) this is a fascinating and actually quite accessible discussion.

Sauvagnargues discusses the historical and institutional climate/context of Simondon’s work and Yuk excavates (in a sort of archaeological manner) some of the key assumptions and intellectual histories of Simondon’s theorisation of individuation, information and technics.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s ‘keynote’ (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time: exosomatisation and tertiary retention as the principal tendency of an originary technics that, in turn, has the character of a pharmakon – which, in more recent work, Stiegler articulates in relation to the contemporary epoch (the anthropocene) as the (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’ – the anthropocene, attention and automation) which characterise the contemporary tendency towards proletarianisation: a loss of knowledge and of the capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years, both in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory) and in relation to what seems to me to be a moral philosophical character to Stiegler’s project, in terms of his diagnosis of the anthropocene and a call for a ‘neganthropocene’.

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body, it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as it inculcates forms of lack of capacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’ I blogged a little while ago.

ADD. If this sounds a little like the ‘extended mind‘ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different are the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

Bernard Stiegler’s Age of Disruption – out soon

Bernard Stiegler being interviewed

Out next year with Polity, this is one of the earlier of Stiegler’s ‘Anthropocene’ books (in terms of publication in French; see also The Neganthropocene), explicating many of the themes that come out in the interviews I’ve had a go at translating in the past three years (see: “The time saved through automation must be given to the people”; “How to survive disruption”; “Stop the Uberisation of society!“; and “Only by planning a genuine future can we fight Daesh“). Of further interest, to some, is that it also contains a dialogue with Nancy (another Derrida alumnus). This book is translated by the excellent Daniel Ross.

Details on the Polity website. Here’s the blurb:

Half a century ago Horkheimer and Adorno argued, with great prescience, that our increasingly rationalised and Westernised world was witnessing the emergence of a new kind of barbarism, thanks in part to the stultifying effects of the culture industries. What they could not foresee was that, with the digital revolution and the pervasive automation associated with it, the developments they had discerned would be greatly accentuated and strengthened, giving rise to the loss of reason and to the loss of the reason for living. Individuals are overwhelmed by the sheer quantity of digital information and the speed of digital flows, and profiling and social media satisfy needs before they have even been expressed, all in the service of the data economy. This digital reticulation has led to the disintegration of social relations, replaced by a kind of technological Wild West, in which individuals and groups find themselves increasingly powerless, driven by their lack of agency to the point of madness.
How can we find a way out of this situation? In this book, Bernard Stiegler argues that we must first acknowledge our era as one of fundamental disruption and detachment. We are living in an absence of epokhē in the philosophical sense, by which Stiegler means that we have lost our noetic method, our path of thinking and being. Weaving in powerful accounts from his own life story, including struggles with depression and time spent in prison, Stiegler calls for a new epokhē based on public power. We must forge new circuits of meaning outside of the established algorithmic routes. For only then will forms of thinking and life be able to arise that restore meaning and aspiration to the individual.
Concluding with a substantial dialogue between Stiegler and Jean-Luc Nancy in which they reflect on techniques of selfhood, this book will be of great interest to students and scholars in social and cultural theory, media and cultural studies, philosophy and the humanities generally.

Talk – Plymouth, 17 Oct: ‘New geographies of automation?’

Rachael in the film Blade Runner

I am looking forward to visiting Plymouth (tomorrow, the 17th October) to give a Geography department research seminar. It’s been nearly twenty years (argh!) since I began my first degree, in digital art, at Plymouth, so I’m looking forward to returning. I’ll be talking about a couple of aspects of ‘The Automative Imagination’ under a slightly different title – ‘New geographies of automation?’ The talk will take in archival BBC and newspaper automation anxieties, management consultant magical thinking (and the ‘Fourth Industrial Revolution’), gendered imaginings of domesticity (with the Jetsons amongst others) and some slightly under-cooked (at the moment) thoughts about ‘agency’ (what kinds of ‘beings’ or ‘things’ can do what kinds of action).

Do come along if you’re free and happen to be in the glorious gateway to the South West that is Plymouth.

“Decolonizing Technologies, Reprogramming Education” HASTAC 2019 call

Louise Bourgeois work of art

This looks interesting. Read the full call here.

Call for Proposals

On 16-18 May 2019, the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC), in partnership with the Institute for Critical Indigenous Studies at the University of British Columbia (UBC) and the Department of English at the University of Victoria (UVic), will be guests on the traditional, ancestral, and unceded territory of the hən̓q̓əmin̓əm̓-speaking Musqueam (xʷməθkʷəy̓əm) people, facilitating a conference about decolonizing technologies and reprogramming education.

Deadline for proposals is Monday 15 October 2018.

Submit a proposal. Please note: This link will take you to a new website (HASTAC’s installation of ConfTool), where you will create a new user account to submit your proposal. Proposals may be submitted in English, French, or Spanish.


Conference Theme

The conference will hold up and support Indigenous scholars and knowledges, centering work by Indigenous women and women of colour. It will engage how technologies are, can be, and have been decolonized. How, for instance, are extraction technologies repurposed for resurgence? Or, echoing Ellen Cushman, how do we decolonize digital archives? Equally important, how do decolonial and anti-colonial practices shape technologies and education? How, following Kimberlé Crenshaw, are such practices intersectional? How do they correspond with what Grace Dillon calls Indigenous Futurisms? And how do they foster what Eve Tuck and Wayne Yang describe as an ethic of incommensurability, unsettling not only assumptions of innocence but also discourses of reconciliation?

With these investments, HASTAC 2019: “Decolonizing Technologies, Reprogramming Education” invites submissions addressing topics such as:

  • Indigenous new media and infrastructures,
  • Self-determination and data sovereignty, accountability, and consent,
  • Racist data and biased algorithms,
  • Land-based pedagogy and practices,
  • Art, history, and theory as decolonial or anti-colonial practices,
  • Decolonizing the classroom or university,
  • Decolonial or anti-colonial approaches involving intersectional feminist, trans-feminist, critical race, and queer research methods,
  • The roles of technologies and education in the reclamation of language, land, and water,
  • Decolonial or anti-colonial approaches to technologies and education around the world,
  • Everyday and radical resistance to dispossession, extraction, and appropriation,
  • Decolonial or anti-colonial design, engineering, and computing,
  • Alternatives to settler heteropatriarchy and institutionalized ableism in education,
  • Unsettling or defying settler geopolitics and frontiers,
  • Trans-Indigenous activism, networks, and knowledges, and
  • Indigenous resurgence through technologies and education.

Reblog> CFP: 3rd International Geomedia Conference: “Revisiting the Home”

Promotional image for the Curzon Memories app

This conference looks great and has plenty of thematic resonance with a lot going on in geography and other disciplines at the moment. Worth submitting if you can… via Gillian Rose.

Everything below is copied from here.

The 3rd International Geomedia Conference: “Revisiting the Home”
Karlstad, Sweden, 7-10 May 2019

Welcome to the 3rd International Geomedia Conference! The term geomedia captures the fundamental role of media in organizing and giving meaning to processes and activities in space. Geomedia also alludes to the geographical attributes of media, for example flows of digital signals between particular places and the infrastructures carrying those flows. The rapid expansion of mobile media, location-based services, GIS and increasingly complex patterns of surveillance/interveillance has amplified the need for critical studies and theorizations of geomedia. The 3rd Geomedia Conference welcomes contributions (full sessions/panels as well as individual papers) that analyze and problematize the relations between any and all communication media and various forms of spatial creativity, performance and production across material, cultural, social and political dimensions. Geomedia 2019 provides a genuinely interdisciplinary arena for research carried out at the crossroads of geography, media and film studies. It also builds bridges to such fields as urban studies, rural studies, regional planning, cultural studies and tourism studies.

The special theme of Geomedia 2019 is “Revisiting the Home”. It responds to the prevailing need to problematize the meaning of home in an “era of globalized homelessness”, in times of extended mobility (migration, tourism, multiple homes, etc.) and digital information flows (notably social media). While such ongoing transitions point to a condition where home-making becomes an increasingly liquid and de-territorialized undertaking, there is also a growing preoccupation with questions of what counts as home and who has the right to claim something as (one’s) home. Home is a construct that actualizes the multilayered tensions between belonging, inclusion and security, on the one hand, and alienation, exclusion and surveillance, on the other. The theme of Geomedia 2019 centers on how media are culturally and materially integrated in and reshaping the home-place (e.g., the “smart home” and the “home-office”) and connecting it to other places and spaces. It also concerns the phenomenological and discursive constructions of home, ranging from the intimate social interaction of domestic spaces to the popular (and sometimes politicized) media nostalgia of imagined communities (nation states, homelands, etc.). Ultimately, “Revisiting the Home” addresses the home as a theoretical concept and its implications for geomedia studies. The theme will be addressed through invited keynote talks, a plenary panel, film screenings and artistic installations. Participants are also encouraged to submit proposals for paper sessions addressing the conference theme.

Keynote speakers:
Melissa Gregg – Intel Corporation, USA
Tristan Thielmann – Universität Siegen, Germany

Plenary panel
“Dreaming of Home: Film and Imaginary Territories of the Real”
Nilgun Bayraktar – California College of the Arts
Christine Molloy – Film director and producer, Desperate Optimists
Les Roberts – University of Liverpool
John Lynch (chair) – Karlstad University

Abstract submissions:
Geomedia 2019 welcomes proposals for individual papers as well as thematic panels in English.

Individual paper proposals: The author submits an abstract of 200-250 words. Accepted papers are grouped by the organizers into sessions of 5 papers according to thematic area.
Thematic panel proposals: The chair of the panel submits a proposal consisting of 4-5 individual paper abstracts (200-250 words) along with a general panel presentation of 200-250 words.

Suggested paper topics include, but are not limited to:

  • Art and event spaces
  • Cinematic geographies
  • Cosmopolitanism
  • Everyday communication geographies
  • Epistemologies and methodologies of geomedia
  • Geographies of media and culture industries
  • Geographies of news
  • Geomedia and education
  • Historical perspectives of geomedia
  • Home and belonging
  • Lifestyle and tourism mobilities
  • Locative and spatial media
  • Material geographies of media
  • Media ecologies
  • Mediatization and space
  • Migration and media
  • Mobility and governance
  • Policy mobilities
  • Power geometries and mobility capital
  • Surveillance and spatial control
  • Urban and rural media spaces

Conference timeline
September 24th 2018: Submission system opens
December 10th 2018: Deadline for thematic panel and individual paper proposals
January 25th 2019: Notes of acceptance and registration opens
February 28th 2019: Early Bird pricing ends
March 15th 2019: Last day of registration

Contact: You can reach us at info@geomedia.se

Organizers and venue:
Geomedia 2019 is hosted by the Geomedia Research Group at the Department of Geography, Media and Communication, Karlstad University, Sweden.

Conference director: Lena Grip
Assistant conference director: Stina Bergman
Director of the Geomedia Research Group and chair of scientific committee: André Jansson