Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s ‘keynote’ (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time in relation to exosomatisation and tertiary retention being the principal tendency of an originary technics that, in turn, has the character of a pharmakon. In more recent work, Stiegler articulates this in relation to the contemporary epoch (the anthropocene) as the (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’ – the anthropocene, attention and automation) which characterise the contemporary tendency towards proletarianisation: a loss of knowledge and of the capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years both: in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory), and: in relation to what seems to me to be a moral philosophical character to Stiegler’s project, in terms of his diagnosis of the anthropocene and a call for a ‘neganthropocene’.

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body; it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI as such. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as they inculcate forms of incapacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather as part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, its institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity for which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’, I blogged about this a little while ago.

ADD. If this sounds a little like the ‘extended mind‘ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different are the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and of what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

CFP> Moral Machines? The ethics and politics of the digital world, Helsinki, March 2019

Man with a colander on his head attached to electrodes

This looks like an interesting event, though I’m not entirely sure what Stiegler would/will say about “the machine’s capability of non-embodied and non-conscious cognition”. Via Twitter.

Moral Machines? The ethics and politics of the digital world

6–8 March 2019, Helsinki Collegium for Advanced Studies, University of Helsinki

With confirmed keynotes from N. Katherine Hayles (Duke University, USA) and Bernard Stiegler (IRI: Institut de Recherche et d’Innovation at the Centre Pompidou de Paris)

As our visible and invisible social reality becomes increasingly digital, the question of the ethical, moral and political consequences of digitalization is ever more pressing. Such an issue is too complex to be met only with instinctive digiphilia or digiphobia. No technology is just a tool; all technologies mark their users and environments. Digital technologies, however, mark them much more intimately than any previous ones have done, since they promise to think in our place – so that they not only enhance homo sapiens’ most distinctive feature but also relieve us of it. We entrust computers with more and more functions, and their help is indeed invaluable, especially in science and technology. Some fear or dream that in the end they will become so invaluable that a huge Artificial Intelligence or Singularity will take control of the whole affair that humans deal with so messily.

The symposium “Moral Machines? The Ethics and Politics of the Digital World” welcomes contributions addressing the various aspects of the contemporary digital world. We are especially interested in the idea that, despite everything they can do, machines do not really think, at least not like us. So, what is thinking in the digital world? How does the digital machine “think”? Both of our confirmed keynote speakers, N. Katherine Hayles and Bernard Stiegler, have approached these fundamental questions in their work, and one of our aims for this symposium is to bring their approaches together for a lively discussion. Hayles has shown that, for a long time, computers were built on the assumption that they imitate human thought – while in fact the machine’s capability for non-embodied and non-conscious cognition sets it apart from everything we call thinking. For his part, Bernard Stiegler has shown how technics in general, and digital technologies in particular, are specific forms of memory that are externalized and made public – and that, at the same time, become very different from and alien to individual human consciousness.

We are seeking submissions from scholars studying different aspects of these issues. Prominent work is done in many fields, ranging from philosophy and literary studies to political science and sociology, not forgetting the wide umbrella of the digital humanities. We hope that the symposium can bring together researchers from hitherto disconnected fields and thus address the ethics and politics of the digital world in a new and inspiring setting. In addition to the keynotes, our confirmed participants already include Erich Hörl, Frédéric Neyrat and François Sebbah.

We encourage approaching our possible list of topics (see below) from numerous angles, from philosophical and theoretical to more practical ones. For example, the topics could be approached from the viewpoint of how they have been addressed within the realm of fiction, journalism, law or politics, and how these discourses possibly frame or reflect our understanding of the digital world.

The possible list of topics, here assembled under three main headings, includes but is not limited to:

  • Thinking in the digital world
    • What kind of materiality conditions digital cognition?
    • How does the nonhuman and nonconscious digital world differ from embodied human thought?
    • How do the digital technologies function as technologies of memory and thought? What kind of consequences might their usage in this capacity have in the long run?
  • The morality and ethics of machines
    • Is a moral machine possible?
    • Have thinking machines invalidated the old argument according to which a technology is only as truthful and moral as its human user? Or can truthfulness and morality be programmed (as the constructors of self-driving cars apparently try to do)?
    • How is war affected by new technologies?
  • The ways of controlling and manipulating the digital world
    • Can and should the digital world be politically controlled, as digital technologies are efficient means of both emancipation and manipulation?
    • How can we control our digital traces and the data gathered about us?
    • On what assumptions are the national and global systems (e.g., financial system, global commerce, national systems of administration, health and defense) designed and do we trust them?
    • What does it mean that public space is increasingly administered by technical equipment made by very few private companies whose copyrights are secret?

“Moral Machines? The Ethics and Politics of the Digital World” is a symposium organized by two research fellows, Susanna Lindberg and Hanna-Riikka Roine at the Helsinki Collegium for Advanced Studies, University of Helsinki. The symposium is free of charge, and there will also be a public evening programme with artists engaging the digital world. Our aim is to bring together researchers from all fields addressing the many issues and problems of the digitalization of our social reality, and possibly contribute towards the creation of a research network. It is also possible that some of the papers will be invited to be further developed for publication either in a special journal issue or an edited book.

The papers to be presented will be selected on the basis of abstracts, which should not exceed 300 words (plus references). Add a bio note (max. 150 words) that includes your affiliation and email address. Name your file [firstname lastname] and submit it as a PDF. If you wish to propose a panel of 3–4 papers, include a description of the panel (max. 300 words), abstracts for the papers (max. 200 words each), and bio notes (max. 150 words each).

Please submit your proposal to moralmachines2019[at]gmail.com by 31 August 2018. Decisions on the proposals will be made by 31 October 2018.

For further information about the symposium, feel free to contact the organizers Susanna Lindberg (susanna.e.lindberg[at]gmail.com) and Hanna-Riikka Roine (hanna.roine[at]helsinki.fi).

Seminar> Charis Thompson: On the Posthuman in the Age of Automation and Augmentation

Still from the video for All is Love by Bjork

If you happen to be in Exeter on Friday 11th May then I urge you to attend this really interesting talk by Prof. Charis Thompson (UC Berkeley), organised by Sociology & Philosophy at Exeter. Here’s the info:

Guest speaker – Professor Charis Thompson: On the Posthuman in the Age of Automation and Augmentation

A Department of Sociology & Philosophy lecture
Date 11 May 2018
Time 14:00 to 15:15
Place IAIS Building/LT1

Charis Thompson is Chancellor’s Professor, Gender and Women’s Studies and the Center for Science, Technology, Medicine and Society, UC Berkeley, and Professor, Department of Sociology, London School of Economics. She is the author of Making Parents: The Ontological Choreography of Reproductive Technologies (MIT Press 2007), which won the Rachel Carson Prize from the Society for Social Studies of Science, and of Good Science: The Ethical Choreography of Stem Cell Research (MIT Press 2013). Her book in progress, Getting Ahead, revisits classic questions on the relation between science and democracy in an age of populism and inequality, focusing particularly on genome editing and AI.

She served on the Nuffield Council Working Group on Genome Editing, and serves on the World Economic Forum’s Global Technology Council on Technology, Values and Policy. Thompson is a recipient of UC Berkeley’s Social Science Distinguished Teaching Award. In 2017, she was awarded an honorary doctorate from the Norwegian University of Science and Technology for work on science and society.

SPA PGR Conference Committee
Maria Dede
Aimee Middlemiss
Celia Plender
Elena Sharratt

Geography’s subject

Conceptualisations of a ‘subject’ or subjectivity form part of a theoretical tradition variously theorising who, what and where the ‘human’ is in geography. I don’t want to poorly approximate excellent intellectual histories of human geography (in particular Kevin Cox’s Making Human Geography and Derek Gregory‘s Geographical Imaginations are worth regularly revisiting) but I think it’s nevertheless probably important to remind ourselves of the kinds of geographical imagination with which we continue to make meaning in geography.

Waymarks in the theoretical landscape of the geographical tradition might include theories of action, human agency, identity, reflexivity, structure and sovereignty. The latter two on that list might be the most influential in geographical work that took alternative paths to the ‘quantitative revolution’ of the post-WWII period. Political agency and power, considered from all sorts of angles, whether geopolitical or bodily intimate, have formed a longstanding interest for those considering ‘subjectivity’. To pick two key influences for the kind of (Anglophone and basically British) geography I’ve ‘grown up’ in, we can look at the influence of Marx and then of literary theory (maybe as assorted flavours of structuralism, post-structuralism, postmodernism etc.).

Geographers influenced by Marxian traditions of thought have been perhaps more concerned with the kinds of people who can act or speak in society – who has power, and how. ‘New’ cultural geographers moved towards acknowledging a greater diversity of identities and an attempt to account for a wider gamut of experiences, extending beyond the perceived limits of the ‘human’. The erstwhile standard reference, The Dictionary of Human Geography, contained ‘human agency’ and ‘sovereignty’ entries from the first edition (1981), while an entry for ‘human subjectivity’ did not arrive until the third (1994).

Conceptualisations of ‘the subject’ and subjectivity can be broadly seen to follow the twists and ‘turns’ in geographical thought (don’t take my word for it, look at the entry in the Dictionary of Human Geography). Whereas the figure of the human ‘subject’ of much of mid-20th-century geographies carried implications of universalism (homo economicus, or ‘nodes’ in spatial modelling), several theoretical ‘turns’ turned that figure into a problem to be investigated. Perhaps from humanistic geographies onwards, geographers have attempted to wrangle and tease out the contradictions of an all-too-easy-to-accept ‘simple being’ (Tuan, Space & Place: p. 203). So, for (what Gregory, in Geographical Imaginations, calls) ‘post-Marxist’ geographical research the sole subject-positioning of ‘class’ elides too much, such as varying (more or less political) differences in identities, e.g. gender, race and sexuality. There is, of course, lots of work tracing out nuanced arguments for a differentiated and decentred subject, which I cannot hope to do justice to in a blogpost, but maybe we can tease out some of the significant conceptual points of reference.

An attention to the identities and subject positions of those who are not male, not heterosexual, non-white, non-Western and not of the global North is important to subject and subjectivity theorisations. This sort of work mostly occurs in the kinds of geographies collected under sub-disciplinary categories like cultural, development, feminist, political, social (and a long list of) geographies. Postcolonial accounts of subaltern subject-positionings and subjectivities powerfully evoke the processes of Othering and Orientalism, especially drawing upon literary theory (such as work by Homi Bhabha, Edward Said and Gayatri Spivak). Feminist geographers highlighted the masculinity of that ‘simple’ figure of ‘the subject’ and the importance of attending to gender and sex (in particular we might look to Gillian Rose‘s Feminism and Geography and the Women and Geography Study Group of the IBG’s 1984 Geography and Gender [1]). This attention to the forms of difference that may influence subject formation and subject-positioning, especially race and sexuality, has grown into something like a normative element of ‘critical’ geographical thought. Of course, this is not without controversy and contestation. Look at, for example, the negotiations around what it means to hold an RGS-IBG annual conference themed on decolonisation – check out the virtual issue of Transactions for some excellent interventions. Taking this further, some geographers variously inspired by wider movements in social theory seek to ‘decentre’ the (human) subject in favour of approaches that address the complex variety and ‘excessive’ nature of experiences that are not delimited by the individual human.

I’m inclined to identify two further themes in contemporary theorisations of a ‘subject’ and subjectivities in geography, which are considered more or less ‘cultural’: (1) theorising pre- and trans- subjective relations; and (2) attempts to account for more-than-human subjectivities.

First, theories of affect as ‘different models of causality and determination; different models of social relations and agency; [without] different normative understandings of political power’ (as my colleague Clive Barnett says in ‘Political affects in public space‘) attempt both to decentre and to render ontological a figure of ‘the subject’ (for more critical reflections on this sort of thing I recommend exploring Clive’s work). Non-representational or more-than-representational geographies seek to decentre ‘the subject’ by appealing to pre-subjective experiences, focussing on ‘affects’ (just do a search for ‘affect’ in geographical journals and you can see the influence of this way of thinking). ‘Affects‘ are processes that exceed any individual (they are ‘trans-subjective’) and structure possibilities for individual thought and experience, thereby constituting subject-formations and positionings (this is sometimes considered ‘ontogenetic’, as my colleague John Wylie has argued).

Second, geographers extend analysis to more than ‘human’ experience. Through the influence of Science and Technology Studies we have ‘hybrid’ geographies (following Sarah Whatmore) that trouble clear ‘subject’/’object’ and ‘human’/’non-human’ distinctions and address distributed forms of agency, such that agency emerges from networks of relations between different ‘actants’, rather than from ‘subjects’ (drawing out the influences, and the geographical mash-up, of Actor-Network Theory and sort-of-Deleuzian assemblage theory). A focus of these sorts of more-than-human geographies has for some time been non-human animals as ‘provocateurs’ (see my colleague Henry Buller‘s Progress Reports [1, 2, 3]). The ‘non-human’ is extended beyond the animal to broader forms of life – including plants, bacteria and other non-human living (and dead) matter (for example, see the fantastic work of my colleagues in the Exeter Geography Nature Materiality & Biopolitics research group) – and further to the inorganic ‘non-human’ (I guess in terms of the new materialisms currently in fashion, such as Jane Bennett’s Vibrant Matter). Finally, perhaps the most influential trope in contemporary geographical accounts of subjectivity and subject-positions (that I end up reading) renders the processes creating a ‘subject’ as, at least in part, coercive and involuntary (more or less following Foucault’s theories of ‘governmentality‘ and ‘subjectification’). This is often elucidated through processes of corporate and state surveillance, many with digital technologies at their heart.

What seems to become clear (to me anyway!) from my ham-fisted listing, and from attempting to make sense of what on earth geographical understandings of subjectivity might be, is the significant turn to ‘ontology’ in a lot of contemporary work. I don’t know whether this is due to styles of research, pressures to write influential (4* etc. etc.) journal articles, or a lack of time for fieldwork and cogitative reflection… but it sort of seems to me that we’re either led by theory, so assuming subjectivity is the right concept and attempting to validate the fairly prescriptive understanding of subjectivity we have in our theory toolkits, or we’re applying a theoretical jelly mould to our data to find ‘affects’, ‘subjectification’ and so on, when maybe, just maybe, there are other things to say about the kinds of experience, the kinds of agency or action, or the ways we understand ourselves and one another.

The abstract figure of ‘the subject’ may be the metaphysical, catch-all entity attributed with the ability to act, in contradistinction to static ‘objects’. This kind of ‘subject’ is a vessel for the identities, personhood and experiences of different and diverse individuals. It’s funny then to think that one of the many concerns expressed about the growth of (big) data-driven ‘personalisation’ and surveillance is that it propagates monolithic data-based ‘subjectivities’ – we are calculated as our digital shadows and so forth… In this sense, the ‘ontological’ entity of ‘subject’ appears to supplant the multiple, perhaps messy, forms of subjective experience. Then both of these can perhaps displace or elide wider discussions about action or agency (which is an important element of discussions of pragmatism in/and geography).

For clarification purposes, I’ve begun to think about three particular ways of interrogating how geographers approach whatever ‘subjectivity’ is: (1) a conceptual figure: ‘the subject’; (2) particular kinds of role and responsibility as: ‘subject positions’; and (3) kinds of experience as: ‘subjectivities’. Of course, we probably shouldn’t think about these as static categories; in a variety of geographical research they are all considered ongoing processes (as various flavours of geographical theory from Massey to Thrift will attest). So, I suppose we might equally render the above list as what gets called: (1) ‘subjectification’; (2) ‘subject positioning’; and (3) ‘subjectivities’.

I could witter on, but I’m running out of steam. I want to (albeit clumsily) tie this back to the recent ‘turn’ to (whatever might be meant by) ‘the digital’ though, cos it’s sort of what’s expected of me and cos it may be vaguely interesting. It’s funny to think that the entity (figure, identity, person etc.) these concepts ground is still, in spite of hybrid geographies and STS influences (mostly), ‘human’. Even within science-fiction tales of robots and Artificial Intelligence (AI), as Katherine Hayles highlights, ‘the subject’ is mostly a human figure – the entity that may act to orchestrate the world (there is, of course, lots to unpack concerning what ‘human’ might mean and whether any technology, however autonomous, can be considered properly non-human).

So, all this might boil down to this supposition: within ‘digital geographies’ debates ‘the subject’, especially the data-based ‘subject’, may be usefully thought about as a figure or device of critique rather than an actually existing thing, while ‘subjectivities’, and how we describe their qualities, remain part of a more plural, maybe more intersectional, explanatory vocabulary.

Notes.

1. I can’t find much online about the original, 1984, Geography and Gender book (maybe it needs a presence?) but the Gender & Feminist Geography Research Group (what the WGSG became) published Gender and Geography Reconsidered, as a CD(!), which is available on the research group’s website.

The Mundane Afrofuturist Manifesto

Via dmf. Definitely worth watching >>


“This dream of utopia can encourage us to forget that outer space will not save us from injustice and that cyberspace was prefigured upon a “master/slave” relationship.

While we are often Othered, we are not aliens.

Though our ancestors were mutilated, we are not mutants.

Post-black is a misnomer.

Post-colonialism is too.

The most likely future is one in which we only have ourselves and this planet.”

The rest is here: http://martinesyms.com/the-mundane-afrofuturist-manifesto/

See also: http://blackradicalimagination.com

‘Ways of Being in a Digital Age’ scoping

I’ve only just caught on here, but the ESRC’s “Ways of Being in a Digital Age” scoping review, for their new theme of the same name, has been awarded to the Liverpool Institute of Cultural Capital (a collaboration between Liverpool and Liverpool John Moores) in a partnership with 17 other institutions (a core of eight in the UK apparently). They say:

The project will undertake a Delphi review of expert opinion and a systematic literature review and overall synthesis to identify gaps in current research.

The project will also run a programme of events to build and extend networks among the academic community, other stakeholders and potential funding partners.

There’s a website, so you can read more there…