Geography’s subject

Conceptualisations of a ‘subject’ or subjectivity form part of a theoretical tradition variously theorising who, what and where the ‘human’ is in geography. I don’t want to poorly approximate excellent intellectual histories of human geography (in particular Kevin Cox’s Making Human Geography and Derek Gregory’s Geographical Imaginations are worth regularly revisiting), but I think it’s nevertheless important to remind ourselves of the kinds of geographical imagination with which we continue to make meaning in geography.

Waymarks in the theoretical landscape of geographical tradition might include theories of action, human agency, identity, reflexivity, structure and sovereignty. The latter two on that list might be the most influential in geographical work that took alternative paths to the ‘quantitative revolution’ of the post-WWII period. Political agency and power, considered from all sorts of angles, whether geopolitical or intimately bodily, have formed a longstanding interest for those considering ‘subjectivity’. To pick two key influences for the kind of (Anglophone and basically British) geography I’ve ‘grown up’ in, we can look at the influence of Marx and then literary theory (maybe as assorted flavours of structuralism, post-structuralism, postmodernism etc.).

Geographers influenced by Marxian traditions of thought have been perhaps more concerned with the kinds of people who can act or speak in society—who has power, and how. ‘New’ cultural geographers moved towards acknowledging a greater diversity of identities and attempted to account for a wider gamut of experiences, extending beyond the perceived limits of the ‘human’. That erstwhile reference work, The Dictionary of Human Geography, contained entries for ‘human agency’ and ‘sovereignty’ from the first edition (1981), while an entry for ‘human subjectivity’ did not arrive until the third (1994).

Conceptualisations of ‘the subject’ and subjectivity can be broadly seen to follow the twists and ‘turns’ in geographical thought (don’t take my word for it, look at the entry in the Dictionary of Human Geography). Whereas the figure of the human ‘subject’ of much mid-20th-century geography carried implications of universalism (homo economicus, or ‘nodes’ in spatial modelling), several theoretical ‘turns’ made that figure into a problem to be investigated. Perhaps from humanistic geographies onwards, geographers have attempted to wrangle and tease out the contradictions of an all-too-easy-to-accept ‘simple being’ (Tuan, Space & Place: p. 203). So, for what Gregory, in Geographical Imaginations, calls ‘post-Marxist’ geographical research, the sole subject-positioning of ‘class’ elides too much, such as varying (more or less political) differences in identities, e.g. gender, race and sexuality. There is, of course, lots of work tracing out nuanced arguments for a differentiated and decentred subject, which I cannot hope to do justice to in a blogpost, but maybe we can tease out some of the significant conceptual points of reference.

An attention to the identities and subject positions of those who are not male, not heterosexual, non-white, non-Western and not of the global North is important to theorisations of the subject and subjectivity. This sort of work mostly occurs in the kinds of geographies collected under sub-disciplinary categories like cultural, development, feminist, political, social (and a long list of other) geographies. Postcolonial accounts of subaltern subject-positionings and subjectivities powerfully evoke the processes of Othering and Orientalism, especially drawing upon literary theory (such as work by Homi Bhabha, Edward Said and Gayatri Spivak). Feminist geographers highlighted the masculinity of that ‘simple’ figure of ‘the subject’ and the importance of attending to gender and sex (in particular we might look to Gillian Rose’s Feminism and Geography and the Women and Geography Study Group of the IBG’s 1984 Geography and Gender [1]). This attention to the forms of difference that may influence subject formation and subject-positioning, especially race and sexuality, has grown into something like a normative element of ‘critical’ geographical thought. Of course, this is not without controversy and contestation. Look at, for example, the negotiations around what it means to hold an RGS-IBG annual conference themed on decolonisation – check out the virtual issue of Transactions for some excellent interventions. Taking this further, some geographers variously inspired by wider movements in social theory seek to ‘decentre’ the (human) subject in favour of approaches that address the complex variety and ‘excessive’ nature of experiences that are not delimited by the individual human.

I’m inclined to identify two further themes in contemporary theorisations of a ‘subject’ and subjectivities in geography, which are considered more or less ‘cultural’: (1) theorising pre- and trans- subjective relations; and (2) attempts to account for more-than-human subjectivities.

First, theories of affect as ‘different models of causality and determination; different models of social relations and agency; [without] different normative understandings of political power’ (as my colleague Clive Barnett says in ‘Political affects in public space’) attempt both to decentre and to render ontological a figure of ‘the subject’ (for more critical reflections on this sort of thing I recommend exploring Clive’s work). Non-representational or more-than-representational geographies seek to decentre ‘the subject’ by appealing to pre-subjective experiences, focussing on ‘affects’ (just do a search for ‘affect’ in geographical journals and you can see the influence of this way of thinking). ‘Affects’ are processes that exceed any individual (they are ‘trans-subjective’) and structure possibilities for individual thought and experience, which constitute subject-formations and positionings (this is sometimes considered ‘ontogenetic’, as my colleague John Wylie has argued).

Second, geographers extend analysis to more-than-‘human’ experience. Through the influence of Science and Technology Studies we have ‘hybrid’ geographies (following Sarah Whatmore) that trouble clear ‘subject’/‘object’ and ‘human’/‘non-human’ distinctions and address distributed forms of agency, such that agency emerges from networks of relations between different ‘actants’, rather than ‘subjects’ (drawing out the influences, and the geographical mash-up, of Actor-Network Theory and sort-of-Deleuzian assemblage theory). A focus of these sorts of more-than-human geographies has for some time been non-human animals as ‘provocateurs’ (see my colleague Henry Buller’s Progress Reports [1, 2, 3]). The ‘non-human’ is extended beyond the animal to broader forms of life—including plants, bacteria and other non-human living (and dead) matter (for example see the fantastic work of my colleagues in the Exeter Geography Nature Materiality & Biopolitics research group)—and further to the inorganic ‘non-human’ (I guess in terms of the new materialisms currently in fashion, such as Jane Bennett’s Vibrant Matter). Finally, perhaps the most influential trope in contemporary geographical accounts of subjectivity and subject-positions (that I end up reading) renders the processes creating a ‘subject’ as, at least in part, coercive and involuntary (more or less following Foucault’s theories of ‘governmentality’ and ‘subjectification’). This is often elucidated through processes of corporate and state surveillance, many with digital technologies at their heart.

What seems to become clear (to me anyway!) from my ham-fisted listing and attempt to make sense of what on earth geographical understandings of subjectivity might be is the significant turn to ‘ontology’ in a lot of contemporary work. I don’t know whether this is due to styles of research, pressures to write influential (4* etc. etc.) journal articles, or a lack of time for fieldwork and cogitative reflection… but it sort of seems to me that we’re either led by theory, assuming subjectivity is the right concept and attempting to validate the fairly prescriptive understanding of subjectivity we have in our theory toolkits, or we’re applying a theoretical jelly mould to our data to find ‘affects’, ‘subjectification’ and so on, when maybe, just maybe, there are other things to say about the kinds of experience, the kinds of agency or action, or the ways we understand ourselves and one another.

The abstract figure of ‘the subject’ may be the metaphysical, catch-all entity attributed with the ability to act, in contradistinction to static ‘objects’. This kind of ‘subject’ is a vessel for the identities, personhood and experiences of different and diverse individuals. It’s funny then to think that one of many concerns expressed about the growth of (big) data-driven ‘personalisation’ and surveillance is that it propagates monolithic data-based ‘subjectivities’: we are calculated as our digital shadows and so forth… In this sense, the ‘ontological’ entity of the ‘subject’ appears to supplant the multiple, perhaps messy, forms of subjective experience. Both of these can then perhaps displace or elide wider discussions about action or agency (an important element of discussions of pragmatism in/and geography).

For clarification purposes, I’ve begun to think about three particular ways of interrogating how geographers approach whatever ‘subjectivity’ is: (1) a conceptual figure: ‘the subject’; (2) particular kinds of role and responsibility: ‘subject positions’; and (3) kinds of experience: ‘subjectivities’. Of course, we probably shouldn’t think about these as static categories; in a variety of geographical research they are all considered ongoing processes (as various flavours of geographical theory from Massey to Thrift will attest). So, I suppose we might equally render the above list as what gets called: (1) ‘subjectification’; (2) ‘subject positioning’; and (3) ‘subjectivities’.

I could witter on, but I’m running out of steam. I want to (albeit clumsily) tie this back to the recent ‘turn’ to (whatever might be meant by) ‘the digital’ though, cos it’s sort of what’s expected of me and cos it may be vaguely interesting. It’s funny to think that the entity (figure, identity, person etc.) these concepts ground is still, in spite of hybrid geographies and STS influences (mostly), ‘human’. Even within science-fiction tales of robots and Artificial Intelligence (AI), as Katherine Hayles highlights, ‘the subject’ is mostly a human figure – the entity that may act to orchestrate the world (there is, of course, lots to unpack concerning what ‘human’ might mean and whether any technology, however autonomous, can be considered properly non-human).

So, all this might boil down to this supposition: within ‘digital geographies’ debates, ‘the subject’, especially the data-based ‘subject’, may be usefully thought about as a figure or device of critique rather than an actually existing thing, while ‘subjectivities’, and how we describe their qualities, remain part of a more plural, maybe more intersectional, explanatory vocabulary.

Notes.

1. I can’t find much online about the original, 1984, Geography and Gender book (maybe it needs a presence?) but the Gender & Feminist Geography Research Group (what the WGSG became) published Gender and Geography Reconsidered, as a CD(!), which is available on the research group’s website.

The dystopian ‘megacity’ future according to the US Defense Dept.

dystopian city

Via the Intercept.

“Megacities: Urban Future, the Emerging Complexity”, from Philippe Desrosiers on Vimeo.

According to a startling Pentagon video obtained by The Intercept, the future of global cities will be an amalgam of the settings of “Escape from New York” and “Robocop” — with dashes of the “Warriors” and “Divergent” thrown in. It will be a world of Robert Kaplan-esque urban hellscapes — brutal and anarchic supercities filled with gangs of youth-gone-wild, a restive underclass, criminal syndicates, and bands of malicious hackers.

At least that’s the scenario outlined in “Megacities: Urban Future, the Emerging Complexity,” a five-minute video that has been used at the Pentagon’s Joint Special Operations University. All that stands between the coming chaos and the good people of Lagos and Dhaka (or maybe even New York City) is the U.S. Army, according to the video, which The Intercept obtained via the Freedom of Information Act.

The “Ethics and Governance of Artificial Intelligence Fund” commits $7.6M to research “AI for the public interest”

A press release from the Knight Foundation and another from Omidyar Network highlight their joint effort with several other funders to commit $7.6M to be distributed across several international institutions to research “AI for the public interest”. This seems like an ambitious and interesting research programme, albeit located in the elite institutions one might unfortunately expect to hoover up this sort of funding… Nevertheless, it will be interesting to see what comes of this.

Here are some snippets (see the full Knight Foundation PR here and the Omidyar PR here).

…a $27 million fund to apply the humanities, the social sciences and other disciplines to the development of AI.

The MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University will serve as founding academic institutions for the initiative, which will be named the Ethics and Governance of Artificial Intelligence Fund. The fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally.

The fund seeks to advance AI in the public interest by including the broadest set of voices in discussions and projects addressing the human impacts of AI. Among the issues the fund might address:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

Supporting a Global Conversation

  • Digital Asia Hub (Hong Kong): Digital Asia Hub will investigate and shape the response to important, emerging questions regarding the safe and ethical use of artificial intelligence to promote social good in Asia and contribute to building the fund’s presence in the region. Efforts will include workshops and case studies that will explore the cultural, economic and political forces uniquely influencing the development of the technology in Asia.
  • ITS Rio (Rio de Janeiro, Brazil): ITS Rio will translate international debates on artificial intelligence and launch a series of projects addressing how artificial intelligence is being developed in Brazil and in Latin America more generally. On behalf of the Global Network of Internet and Society Research Centers, ITS Rio and the Berkman Klein Center will also co-host a symposium on artificial intelligence and inclusion in Rio de Janeiro, bringing together almost 80 centers and an international set of participants to address diversity in technologies driven by artificial intelligence, and the opportunities and challenges posed by it around the world.

Tackling Concrete Challenges

  • AI Now (New York): AI Now will undertake interdisciplinary, empirical research examining the integration of artificial intelligence into existing critical infrastructures, looking specifically at bias, data collection, and healthcare.
  • Leverhulme Centre for the Future of Intelligence (Cambridge, United Kingdom): Leverhulme Centre for the Future of Intelligence will be focused on bringing together technical and legal perspectives to address interpretability, a topic made urgent by the European Union’s General Data Protection Regulation coming into force next year.
  • Access Now (Brussels, Belgium): Access Now will contribute to the rollout of the General Data Protection Regulation by working closely with data protection authorities to develop practical guidelines that protect user rights, and educate public and private authorities about rights relating to explainability. The organization will also conduct case studies on data protection issues relating to algorithms and artificial intelligence in France and Hungary.

Bolstering Interdisciplinary Work 

  • FAT ML (Global): FAT ML will host a researcher conference focused on developing concrete, technical approaches to securing values of fairness, accountability, and transparency in machine learning.
  • Data & Society (New York): Data & Society will conduct a series of ethnographically-informed studies of intelligent systems in which human labor plays an integral part, and will explore how and why the constitutive human elements of artificial intelligence are often obscured or rendered invisible. The research will produce empirical work examining these dynamics in order to facilitate the creation of effective regulation and ethical design considerations across domains.

##

About the Ethics and Governance of Artificial Intelligence Fund 

The Ethics and Governance of Artificial Intelligence Fund aims to support work around the world that advances the development of ethical artificial intelligence in the public interest, with an emphasis on applied research and education. The fund was launched in January 2017, with an initial investment of $27 million from the John S. and James L. Knight Foundation, Omidyar Network, LinkedIn founder Reid Hoffman, the Hewlett Foundation, and Jim Pallotta. The activities supported through the fund aim to address the global challenges of artificial intelligence from a multidisciplinary perspective—grounded in data, code, and academic analysis. The fund will advance the public understanding of artificial intelligence and support the creation of networks that span disciplines and topics related to artificial intelligence. The Miami Foundation is serving as fiscal sponsor for the fund.

Via Alexis Madrigral.

Do robots replace (human) jobs? – a further update on imaginings of automation

In November I reviewed some literature concerning the narratives of ‘robots’ ‘destroying’ jobs, replacing workers and maybe driving down wages. I showed in that blogpost how different articles contradicted one another about the destruction or creation of jobs/work through automation, and argued we probably need to think about this as a discourse, heavy with technological determinisms (following Wyatt) and open to social scientific critique. In April I went on to look at other stories about automation that had emerged in the intervening five or six months, and in particular the ways that jobs are becoming ‘unbundled’, with the various parts of a job role being split apart.

In a post earlier today I discussed some of these issues further in relation to the ways these sorts of arguments are storied, and the forms of authority they acquire through the use of statistics. This all goes towards constituting the forms of imagination I discussed in the earlier blogposts – it’s all a part of what I’ve been calling, to make sense of things for myself, a sort of ‘automative imagination’.

I have also today seen an interesting, if lengthy, argument by Mishel and Bivens, of the Economic Policy Institute (considered to be a ‘liberal’ think tank), arguing against some of the recent “robot apocalypse” economic research (in particular showing the widespread misreading of the piece by Acemoglu and Restrepo I highlighted back in April) and the media narratives built around such research:

What is remarkable about this media narrative is that there is a strong desire to believe it despite so little evidence to support these claims. There clearly are serious problems in the labor market that have suppressed job and wage growth for far too long; but these problems have their roots in intentional policy decisions regarding globalization, collective bargaining, labor standards, and unemployment levels, not technology.

This report highlights the paucity of the evidence behind the alleged robot apocalypse, particularly as mischaracterized in the media coverage of the 2017 Acemoglu and Restrepo (A&R) report. Yes, automation has led to job displacements in particular occupations and industries in the past, but there is no basis for claiming that automation has led—or will lead—to increased joblessness, unemployment, or wage stagnation overall. We argue that the current excessive media attention to robots and automation destroying the jobs of the past and leaving us jobless in the future is a distraction from the main issues that need to be addressed: the poor wage growth and inequality caused by policies that have shifted economic power away from low- and moderate-wage workers. It is also the case that, as Atkinson and Wu (2017) argue, our productivity growth is too low, not too high.

[…]

What is remarkable about the automation narrative is that any research on robots or technology feeds it, even if the bottom-line findings of the research do not validate any part of it. The most recent example is the new research by Acemoglu and Restrepo (2017a) on the impact of robots on employment and wages. Here is how The New York Times wrote about this research:

The paper adds to the evidence that automation, more than other factors like trade and offshoring that President Trump campaigned on, has been the bigger long-term threat to blue-collar jobs. The researchers said the findings—“large and robust negative effects of robots on employment and wages”—remained strong even after controlling for imports, offshoring, software that displaces jobs, worker demographics and the type of industry. (Miller 2017)

The EPI article is quite long so I won’t attempt to abridge it here, but the authors do provide some “key findings”, which I reproduce below. I recommend reading the full article though; it’s an interesting read for anyone who is remotely interested in automation, the nature of work, and the narratives of technology innovation.

Key findings

In this paper we make the following points:

Acemoglu and Restrepo’s new research does not show large and negative effects on overall employment stemming from automation.

  • A&R’s methodology delivers high-quality local estimates of the impact of one sliver of automation (literally looking just at robots). But their translation of these high-quality local estimates (for “commuting zones”) into national effects relies on stylized and largely unrealistic assumptions.
  • Even if one takes the unreliable simulated (not estimated) national effects as given, they are small (40,000 jobs lost each year) relative to any reasonable benchmark. For example, our analysis shows that their estimated job losses from the “China trade shock” are roughly four times as large as their estimated job losses from growing robot adoption in the 2000s.
  • While A&R’s report shows that “robots” are negatively correlated with employment growth across commuting zones, it finds that all other indicators of automation (nonrobot IT investment) are positively correlated or neutral with regard to employment. So even if robots displace some jobs in a given commuting zone, other automation (which presumably dwarfs robot automation in the scale of investment) creates many more jobs. It is curious that coverage of the A&R report ignores this major finding, especially since it essentially repudiates what has been the conventional wisdom for decades—that automation has hurt job growth (at least for less-credentialed Americans).
  • The A&R results do not prove that automation will lead to joblessness in the future or overturn previous evidence that automation writ large has not led to higher aggregate unemployment.

Technological change and automation have not been the main forces driving the wage stagnation and inequality besieging working-class Americans.

  • There is no historical correlation between increases in automation broadly defined and wage stagnation or increasing inequality. Automation—the implementation of new technologies as capital equipment or software replace human labor in the workplace—has been an ongoing feature of our economy for decades. It cannot explain why median wages stagnated in some periods and grew in others, or why wage inequality grew in some periods and shrank in others.
    • Indicators of automation increased rapidly in the late 1990s and the early 2000s, a period that saw the best across-the-board wage growth for American workers in a generation.
    • Indicators of automation fell during two periods of stagnant (or worse) wage growth: from 1973 to 1995 and from 2002 to the present. In these periods, inequality grew as wage growth for the richest Americans far outpaced wage growth of everyone else.
    • During the long period of shared wage growth from the late 1940s to the mid-1970s (shared because all workers’ wages grew at roughly the same pace), indicators of automation also increased rapidly.
  • There is no evidence that automation-driven occupational employment “polarization” has occurred in recent years, and thus no proof it has caused recent wage inequality or wage stagnation.
    • First, numerous studies have documented that there was no occupational employment polarization—in which employment expands in higher-wage and lower-wage occupations while hollowing out in the middle—in the 2000s. Employment has primarily expanded in the lowest-wage occupations. Yet wage inequality between the top and the middle has risen rapidly since 2000.
    • Second, wage inequality overwhelmingly occurs between workers within an occupation, not between workers in different occupations. So even if occupational employment polarization had occurred, it could not explain the growth of wage stagnation or inequality.
  • There is no evidence of an upsurge in automation in the last 10 to 15 years that has affected overall joblessness. The evidence indicates automation has slowed. Trends in productivity, capital investment, information equipment investment, and software investment suggest that automation has decelerated in the last 10 or so years. Also, the rate of shifts in occupational employment patterns has been slower in the 2000s than in any period since 1940. Therefore, there is no empirical support for the prominent notion that automation is currently accelerating exponentially and leading to a robot apocalypse.

The fact that robots have displaced some jobs in particular industries and occupations does not mean that automation has or will lead to increased overall joblessness. 

  • As noted above, data showing a recent deceleration in automation suggest that there is no footprint of an automation surge that can be expected to accelerate in the near future.
  • Technological change and automation absolutely can, and have, displaced particular workers in particular economic sectors. But technology and automation also create dynamics (for example, falling relative prices of goods and services produced with fewer workers) that help create jobs in other sectors. And even when automation’s job-generating and job-displacing forces don’t balance out, government policy can largely ensure that automation does not lead to rising overall unemployment.
    • The narrative that automation creates joblessness is inconsistent with the fact that we had substantial and ongoing automation for many decades but did not have continuously rising unemployment. And the fall in unemployment from 10 percent to below 5 percent since 2010 is inconsistent with the claim that surging automation is generating greater unemployment.
    • As noted above, fluctuations in the pace of technological change have been associated with both good and bad labor market outcomes. So there is no reason to deduce that we should fear robots.

The American labor market has plenty of problems keeping it from working well for most Americans, and these are the problems that should occupy our attention.

  • The problems afflicting American workers include the failure to secure genuine full employment through macroeconomic policy and the intentional policy assault on the bargaining power of low- and middle-wage workers; these are the causes of wage stagnation and rising inequality. Solving these actually existing problems should take precedence over worrying about hypothetical future effects of automation.

See the full article “The zombie robot argument lurches on” on the EPI website.

The ambiguity of sharing images

Two tweets, about 12 hours apart. It seems to me, in an entirely unsystematic, morning-coffee kind of analysis, that the two posts demonstrate something of the ambiguity of image sharing practices and the circulation of images, at least in my experience of one platform, Twitter.

The “Grease” tweet, through humour, attempts to comment on contemporary geopolitics. The veracity (or not) of the image possibly doesn’t matter.

The ‘fact check’ nature of the later tweet directly addresses the (lack of) authenticity of the image itself, showing the ‘original’.

So there’s something about ‘fakeness’ of media, the politics of circulation, something about simulacrum and the convening of publics and maybe something about the ambivalence of image making and sharing practices that falls within the “meme” discourse.

In discussing her work as part of the RGS-IBG ‘digital geographies’ working group symposium about 10 days ago, Gillian Rose discussed the ways in which we may or may not malign the ‘everydayness’ of photographic or image practices and why it remains necessary to study and engage with the everyday practices of meaning-making (there’s a course for this, co-convened by Gillian).

This perhaps prompts some questions about the above tweets. For example, what is it we can or might want to say about the images themselves, their circulation and how they fit into wider, everyday, meaning-making practices? The doctored image fits into a particular aesthetic of ‘memes’ and is contextualised in text in the post, which also goes for the ‘fact check’ tweet too, in a way. How do we interpret the (likely) different intentions behind the thousands of retweets of the above? How might we capture the ‘polymedia’ (following Miller et al.) lives of such images? (Is that even possible?) How might we interrogate what I’m suggesting is the ambivalence of ‘sharing’? I suggest this cannot be served by the mass analysis of image corpora (following Manovich), nor is it really reducible to the ‘attention economy’ – it’s not only about the labour of sharing or the advertising it enables. Instead, I guess what I’m fumbling towards is asking for the analysis of the circulation practices for (copies of) a single image within a network (which may or may not span different platforms).

The danger, I increasingly feel, is that we all-too-quickly resort to super-imposing onto these case studies our ontotheological or ideological meta-narratives – so, it may ‘really’ be about affect, neoliberalism and so on… except of course, it isn’t only about those things, and while they may be important analytical frames they may not address the questions we’re interested in, or should be, posing. I’m not saying such framings are wrong, I’m saying they’re not the only frames of analysis.

All of this leads me to confess that I am beginning to wonder if our ‘digital methods’ (following Rogers and others) are really up to this sort of task… As yet I’ve not read anything to convince me otherwise, which actually sort of surprises me. The closest I’ve got is the media ethnography work of the outstanding Why We Post project – but, of course, that isn’t particularly a “digital” method, which maybe says something (maybe about my own bias). I’d be interested to know if anyone has any thoughts.

A further thing I wonder is whether or not these sorts of practices will remain stable enough for long enough to warrant the ‘slower’, considered, kinds of research that might enable us to begin to get at answers to my all-too-general, or misplaced, questions above. I remain haunted by undergraduate and masters research into now-defunct platforms and styles of media use… Friendster and MySpace, anyone?

Some relevant links:

Revisiting a 90s future: AHRC/EPSRC Next Generation of Immersive Experiences

Glitched image of a 1990s NASA VR experience

Emails have been circulating this week about this so it may already be familiar, but ‘digital’ academics may be interested in this… The below is more or less the text of the emails I’ve seen.

Sources: Teesside / AHRC

The Arts and Humanities Research Council (AHRC) and the Engineering and Physical Sciences Research Council (EPSRC) offer support to interdisciplinary research partnerships with the potential to create new knowledge and address major challenges for the development of the next generation of immersive experiences. Proposals are invited from interdisciplinary research partnerships in the areas of:

– Memory – how can new immersive experiences extend the access, interpretation and reach of memory-based institutions such as museums, galleries, archives and collections?
– Place – what new experiences can be created by the combination of immersive technology and place-based services?
– Performance – what new creative practices are enabled by immersive technology, what new experiences can be offered to audiences and how can this transform or extend models of performance?

One of the primary aims of this call is to build interdisciplinary capabilities, collaborations, and partnerships which will be well placed to take advantage of future opportunities for research and innovation (both from UKRI and beyond). As such, the AHRC expects partnerships funded through this call to have a life beyond the end of the funding period.

It is expected that the outputs from the funded projects and the wider cohort will include:

  • Interdisciplinary partnerships including academic researchers and creative economy partners;
  • The potential research questions within this space to have been further developed and refined;
  • Sufficient proof of concept(s) / prototypes / visualisations to have been developed by researchers;
  • Tangible advances in creativity, insights, knowledge and understanding in the area of immersive experiences across one or more of the themes;
  • Working with AHRC, EPSRC, IUK and other funders to understand the complexities of funding research in the immersive experience space and potential barriers to exploitation and scaling up of research outputs.

All outcomes should ensure that the UK creative economy can be a world leader in the conceptualisation, design, production and distribution of commercial and cultural immersive experiences in the future.

This call represents an initial stage of investment in the next generation of immersive experiences. AHRC are currently scoping what form further stages of investment might be, and the outcomes of this call will help to inform these further stages.

The maximum value of applications at 100% FEC is £75,000; the deadline for submissions is 5 October 2017. For further details, see the AHRC website.

The Call Document (PDF, 239KB) provides further information on the context and aims for, and the scope of, the call as well as information on how to submit applications.

The AHRC are running a series of engagement events in early July 2017 to cover this call. Details of these events can be found on the AHRC website.

“Digital”, noun

Minister of State for Digital, responsibilities

I tweeted a bit about the name change of the UK government Department for Culture, Media and Sport to Department for Digital, Culture, Media and Sport yesterday. I started off with a throwaway remark about the ongoing transition of the word “digital” to becoming a noun, rather than an adjective.

I think I understand, a little, why this has taken off in some quarters, especially in the civil service and government. It seems, to me, to be about capturing, with one word, the array of sometimes loosely related activities undertaken by folks who may have ‘digital’ in their job/unit title and the sorts of things that come under banners such as “digital transformation”.

“Digital” in such contexts might be seen as a managerial or organisational term that describes particular kinds of institutional function/activity. Some of these might have been called something else, previously, but some of the things signified by “digital” are hard to otherwise define, hence the co-opting of an adjective as a noun.

There’s loads more to say about this, but I’m sort of writing this post as a reminder to return to this stuff… maybe…

A communications primer – Eames

Via dmf.

Or “information theory for beginners”, maybe… A heavily stylised interpretation of Shannon’s information theory, made for IBM by Ray and Charles Eames, which may be of interest to those of a cybernetic persuasion.

… it’s a testament to the work of the US mathematician and ‘father of information theory’ Claude Shannon (1916-2001) that his model of communication, laid out in his landmark book A Mathematical Theory of Communication (1949), is still so broadly applicable.

Working from Shannon’s book, in 1953 the iconic husband-and-wife design team Ray and Charles Eames created the short film A Communications Primer for IBM, intending to ‘interpret and present current ideas on communications theory to architects and planners in an understandable way, and encourage their use as tools in planning and design’. Released at the dawn of the personal computer age, the film’s exploration of symbols, signals and ‘noise’ remains thoroughly – almost stunningly – relevant when viewed some 64 years later.
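For anyone who wants the quantity behind the film’s talk of symbols, signals and ‘noise’, the core of Shannon’s theory is entropy – the average information, in bits, carried per symbol of a source. A minimal sketch (the function name and example distributions are mine, for illustration):

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol.
    `probs` is a probability distribution over a source's symbols."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain: one full bit per toss.
print(entropy([0.5, 0.5]))  # 1.0

# A biased coin is more predictable, so each toss carries less information.
print(entropy([0.9, 0.1]))  # ≈ 0.469
```

The design choice here is Shannon’s: information measures surprise, so a near-certain outcome contributes almost nothing, which is why compression and noisy-channel coding both fall out of this one formula.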

Algorithm 1986

Just cos it’s fun…

From the second edition of The Dictionary of Human Geography, written by Prof. Peter Gould:

A step-by-step procedure, usually supported by a formal mathematical proof, that leads to a desired solution. An example is the Simplex Algorithm in Linear Programming. Heuristic algorithms are not supported by formal proofs, but are highly likely to lead to the optimal solutions. Examples include finding multiple locations within, and the shortest paths through, a network. The word derives from the name Al-Ghorizmeh, a distinguished Arab geographer-mathematician of the sixth century A.D. (See also Districting Algorithm.)   PG
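The ‘shortest paths through a network’ example the entry gestures at is, these days, a few lines of code. A minimal sketch of Dijkstra’s shortest-path algorithm (the little network here is invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` in a weighted directed graph
    given as {node: {neighbour: edge_weight}}."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return dist

network = {
    "a": {"b": 1, "c": 4},
    "b": {"c": 2, "d": 5},
    "c": {"d": 1},
    "d": {},
}
print(dijkstra(network, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

Unlike the heuristic algorithms the entry mentions, this one does come with a formal proof of optimality (for non-negative edge weights), which fits Gould’s definition nicely.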

Slightly different from the two-sentence version by Prof. Ron Johnston in the 5th edition (2009).