Teaching digital geographies – a call

An email was circulated earlier today on behalf of the Digital Geographies Working Group of the RGS-IBG asking the following:

The Digital Geographies Working Group of the RGS/IBG is hoping to develop a resource for those of us interested in teaching ‘digital geographies’.

If you teach a module to undergraduate or postgraduate students that’s about digital geographies, however defined, and have a course handbook or syllabus or website that you’re willing to share by having it posted on the DGWG website, please send a copy to Gillian Rose copying in Jeremy Crampton.

We’ll gather them together and let you know when they’re all available on our website, http://www.digitalrgs.org.

Just thought this worth sharing as some people may find it of interest…

Our friends electric

Another wonderful video from Superflux exploring how to think about the kinds of relationships we may or may not have with our ‘smart’ stuff…

Our Friends Electric from Superflux on Vimeo.
Our Friends Electric is a short film by Superflux about voice-enabled AI assistants who ask too many questions, swear & recite Marxist texts.

The film was commissioned by Mozilla’s Open IoT Studio. The devices in the film are made by Loraine Clarke and Martin Skelly from Mozilla’s Open IoT Studio and the University of Dundee.

For more information about the project visit: http://superflux.in/index.php/work/friends-electric/#

Reblog> Addiction, excess and artists: strategies of resistance

Via Tony Sampson. Looks like a great event from Furtherfield >>

Addiction, excess and artists: strategies of resistance

Are We All Addicts Now? Symposium and Book Launch.

Date: Tuesday 7th November, 6.30 – 9pm

Venue: Central St Martins, University of the Arts London, 1 Granary Square, London, N1C 4AA

Tickets for the event are now available so please feel free to share this info.

http://www.furtherfield.org/programmes/event/are-we-all-addicts-now-symposium-and-book-launch

Here’s the blurb for the panel I’ll be talking on

Addiction, excess and artists: strategies of resistance

Techniques such as neuro-marketing are used online to keep users on device, driving endless circulation and drawing profits from every click. While many artists have celebrated overstimulation and digital excess, others incorporate strategies of resistance into their practice. In a hyper digital world, what are the possibilities for defying techniques such as neuro-marketing, nudging and gamification and what role can artists play in these acts of resistance? 

Reader in Digital Culture Tony D. Sampson explores neuro-marketing and digital addiction 

Artists Katriona Beales and Fiona MacDonald : Feral Practice discuss strategies of resistance from the AWAAN exhibition

Artist and writer Emily Rosamond on reputation addiction and how to resist it 

Reblog> Martin Dodge and Rob Kitchin, Mapping Cyberspace (free book download)

Via Stuart Elden.

Mapping Cyberspace was a formative introduction to ‘geography’ for me as an undergraduate digital arts student. It certainly influenced my (all-too-naive) BSc dissertation ideas… It’s great this is available, it documents so many things that seemed so vital at the time and that now appear almost like peculiar mirages.

Martin Dodge and Rob Kitchin, Mapping Cyberspace (free book download)

Mapping Cyberspace – Martin Dodge & Rob Kitchin

Martin Dodge and Rob Kitchin’s 2001 book Mapping Cyberspace is now available as a free download. There is also a website about the book here.

Mapping Cyberspace is a ground-breaking geographic exploration and critical reading of cyberspace, and information and communication technologies. The book:

  • provides an understanding of what cyberspace looks like and the social interactions that occur there
  • explores the impacts of cyberspace, and information and communication technologies, on cultural, political and economic relations
  • charts the spatial forms of virtual spaces
  • details empirical research and examines a wide variety of maps and spatialisations of cyberspace and the information society
  • has a related website at http://www.MappingCyberspace.com.

This book will be a valuable addition to the growing body of literature on cyberspace and what it means for the future.

Our vacillating accounts of the agency of automated things

Rachael in the film Blade Runner

“There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.”

Alexis Madrigal

I wonder whether we have, if not an increasing, then certainly a more visible problem with addressing the agency of automated processes – in particular, automation that functions predominantly through software, i.e. the stuff we refer to as ‘algorithms’ and ‘algorithmic’, possibly ‘intelligent’ or ‘smart’, and perhaps even ‘AI’, ‘machine learning’ and so on. I read three things this morning that seemed to come together to concretise this thought: Alexis Madrigal’s article in The Atlantic – “Google and Facebook have failed us”, James Somers’ article in The Atlantic – “The coming software apocalypse” – and L.M. Sacasas’ blogpost “Machines for the evasion of moral responsibility”.

In Madrigal’s article we can see how the apparent autonomy of ‘the algorithm’ becomes the fulcrum around which machinations about ‘fake news’ turn, in this case regarding the 2nd October 2017 mass shooting in Las Vegas. The apparent incapacity of an automated software system to perform the kinds of reasoning attributable to a ‘human’ editor is diagnosed on the one hand, while on the other the speed at which such breaking news events take place, and the volume of data being processed by ‘the algorithm’, led to Google admitting that their software was “briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries”. Madrigal asserts:

It’s no longer good enough to shrug off (“briefly,” “for a small number of queries”) the problems in the system simply because it has computers in the decision loop.

In Somers’ article we can see how decisions made by programmers writing software that processed call sorting and volume for the emergency services in Washington State led to the 911 phone system being inaccessible to callers for six hours one night in 2014. As Somers describes:

The 911 outage… was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.

Quoting Nancy Leveson, an MIT professor of aeronautics (of course), Somers observes: “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
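To make the mechanism concrete, here is a minimal sketch of the failure mode Somers describes – a running counter used to mint call identifiers, capped by a threshold chosen at design time, with no alarm for when it is exceeded. The names and numbers are invented for illustration; this is not Intrado’s actual code.

```python
# Hypothetical sketch of the 911 outage failure mode (invented names/values).

MAX_CALL_ID = 40_000_000   # "a number in the millions", picked at design time

call_counter = 0

def dispatch(call):
    """Placeholder for the real routing logic (not the point of the sketch)."""
    return call["id"]

def route_call(call):
    """Assign a unique ID to an incoming call and hand it to a dispatcher."""
    global call_counter
    if call_counter >= MAX_CALL_ID:
        # The branch nobody anticipated: no alarm is raised, nothing is
        # logged, the call is silently rejected and the caller hears a
        # busy signal.
        return None
    call_counter += 1
    call["id"] = call_counter   # unique identifier derived from the counter
    return dispatch(call)
```

The point is not the arithmetic but Somers’ observation that “the fix was to change a single number”: the consequential agency lies in an unremarkable design decision rather than in anything mysteriously autonomous.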

Michael Sacasas, in his blogpost, refers to Madrigal’s article and draws out the argument that the complex processes of software development and maintenance, and large, complicated organisations such as Facebook, leave those working there open to working in a ‘thoughtless’ manner:

“following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely calculate or problem solve) and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of ethical and moral failures. …It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility.”

The simplest version of what I’m getting at is this: on the one hand we attribute significant agency to automated software processes; this usually involves talking about ‘algorithms’ as quasi- or pretty much autonomous, which tends to imply that whatever it is we’re talking about, e.g. “Facebook’s algorithm”, is ‘other’ to us, ‘other’ to what might conventionally be characterised as ‘human’. On the other hand we talk about how automated processes can encode the assumptions and prejudices of the creators of those techniques and technologies, such as the ‘racist soap dispenser‘.

There are a few things we can perhaps note about these related but potentially contradictory narratives.

First, they perhaps imply that the moment of authoring, creating, making, manufacturing is a one-off event – the things are made, the software is written and it becomes set, a bit like baking a sponge cake – you can’t take the flour, sugar, butter and eggs out again. Or, in a more nuanced version of this point, there is a sense that once set in train these things are really, really hard to change, which may, of course, be true in particular cases but may not be a general rule. A soap dispenser’s sensor may be ‘hard coded’ to particular tolerances, whereas what gets called ‘Facebook’s algorithm’, while complicated, is probably readily editable (albeit with testing, version control and so on). This kind of narrative freights a form of determinism – there is an implied direction of travel to the technology.
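As a hypothetical illustration of the soap-dispenser point (the real firmware is not public, so the threshold value and function names below are invented), a single hard-coded tolerance is enough to encode an assumption about whose hand the device will ‘see’:

```python
# Hypothetical sketch: a hard-coded sensor tolerance encoding an assumption.
# An infrared sensor reads reflected light; a cut-off chosen and tested
# against lighter skin tones may simply never trigger for darker skin.

REFLECTANCE_THRESHOLD = 0.45   # invented value, fixed at manufacture

def should_dispense(sensor_reading: float) -> bool:
    """Treat a reading above the threshold as 'a hand is present'."""
    return sensor_reading >= REFLECTANCE_THRESHOLD

# Unlike a ranking system that can be re-weighted, tested and redeployed,
# this value is baked into the device: changing it means reflashing or
# replacing hardware, not editing code behind a web service.
```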

Second, the kinds of automated processes I’m referring to here, ‘algorithms’ and so on, get ‘black boxed’. This is done not only by those who create, operate and benefit from those processes – the frequently referred-to Google, Facebook, Amazon and so on – but also, in part, by those who seek to highlight the black boxing. As Sacasas articulates: “The black box metaphor tries to get at the opacity of algorithmic processes”. He offers a quote from a series of posts by Kevin Hamilton which illustrates something of this:

Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.

In a similar manner to the imagined participant in Searle’s “Chinese Room” thought experiment, the Facebook user can only guess at the efficacy of their relation to the black-boxed process. ‘Tweaking our settings’ and responses might, as Hamilton suggests, “become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” A further step here is that even those of us diagnosing and analysing the ‘black boxes’ are perhaps complicit in keeping them in some way obscure. As Evan Selinger and Woodrow Hartzog argue, things that are obscure can be seen as ‘safe’, which is the principle of cryptography. Obscurity, for Selinger and Hartzog, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power”. Nevertheless, obscurity can also be an excuse – the black box is impenetrable, not open to analysis, and so we settle on other analytic strategies or simply focus on other things. A well-worn strategy seems to be to retreat to the ontological, to which I’ll return shortly.

Third, following from the above, perhaps the ways in which we identify ‘black boxes‘, or the forms of black boxing we do ourselves, over-simplify or elide complexity. This is a difficult balancing act. A good concept becomes a short-hand that freights meaning in useful ways. However, there is always the potential that it hides as much as it reveals. In the case of the phenomena outlined in the two articles above, we perhaps focus on the ends, what we think ‘the algorithm’ does – the kinds of ‘effects’ we see, such as ‘fake news’ and the breakdown of an emergency telephone system, or even a ‘racist soap dispenser’. It is then very tempting to perform what Sally Wyatt calls a ‘justificatory’ technological determinism – not only is there a ’cause and effect’ but these things were bound to happen because of the kinds of technological processes involved. By fixing ‘algorithms’ as one kind of thing, we perhaps elide the ways in which they can be otherwise and, perhaps more seriously, elide the processes of development, the resources, and the use and reception of those technologies and their integration into wider sociotechnical systems and society. These things don’t miraculously appear from nowhere – they are the result of lots of actions and decisions, some banal, some ‘strategic’, some with good intentions and some perhaps morally questionable. By black boxing ‘the algorithm’, attributing ‘it’ with agency and making it ‘other’ to human activities, we ignore or obscure the organisational processes that make it possible at all. I argue we cannot see these things as wholly one thing or the other – the black-boxed entity or the messy sociotechnical system – but rather as both, and we need to accommodate that sort of duality in our approaches to explanation.

Fourth, normative judgements are attached to the apparent agency of an automated system when it is perceived as core to the purpose of the business. Just like any other complicated organisation whose business comes to be seen as a ‘public good’ (energy companies might be another example), competing, perhaps contradictory, narratives take hold. The purpose of the business may be to make money – in the case of Google and Facebook this is of course primarily through advertising, requiring attractive content to which to attach adverts – but the users perhaps consider their experience, which is ‘free’, more important. It seems to have become received wisdom that the very activities that drive the profits of the company – boosting content that drives traffic, which serves more advertising and, I assume, results in more revenue – run counter to accepted social and moral norms. This exemplifies the competing understandings of what companies like Google and Facebook do – in other words, what their ‘algorithms’ are for. This has a bearing on the kinds of stories we then tell about the perceived, or experienced, agency of the automated system.

Finally (for now), there is a tendency for academic social scientific studies of automated software systems to resort to ontological registers of analysis. There may be all sorts of reasons used as justification for this, such as the specific detail of a given system not being accessible, or (quite often) only accessible through journalists, or the funding not being available to do the research. However, it also pays dividends to do ‘hard’ theory. In the part of academia I knock about in, geography-land and its neighbours, technology has been packaged up into the ‘non-human’, whereby the implication is that particular kinds of technology are entirely separate from us, humans, and can be seen to have ‘effects’ upon us and our societies. This is trendy cos one can draw upon philosophy that has long words and hard ideas in it, in particular ‘object oriented ontology‘ (and, to a much lesser extent, the ‘bromethean‘ accelerationists). The generalisable nature of ‘big’ theory is beguiling: it seems to permit us to make general, perhaps global, claims and often results in a healthy return in the academic currency of citations. Now, I too am guilty of resorting to theory, which is more or less abstract, through the work of Bernard Stiegler in particular, but I’d like to think I haven’t disappeared down the almost theological rabbit hole of trying to think objects in themselves through abstract language such as ‘units‘ or ‘allopoietic objects‘ and ‘perturbations’ of non-human ‘atmospheres’.

It seems to me that while geographers and others have been rightly critical of simplistic binaries of human/technical, there remains a common habit of referring to a technical system that has been written and is maintained by ‘humans’ as other to whatever that ‘human’ apparently is, and of referring to technologically mediated activities as somehow extra-spatial, as virtual, in contra-distinction to a ‘real’. This is plainly a contradiction. On the one hand this positions the technology in question (‘algorithms’ and so on) as totally distinct from us, imbued with an ability to act without us and so potentially powerful. On the other hand, if that technology is ‘virtual’ and not ‘real’, it implies it doesn’t count in some way. While in the late 90s and early 00s the ‘virtual’ technologies we discussed were often seen as somewhat inconsequential, more contemporary concerns about ‘fake news’, malware and encoded prejudices (such as racism) have made automated software systems part of the news cycle. I don’t think it is a coincidence that we’ve moved from metaphors of liberty and community online to metaphors of ‘killer robots’, like the Terminator (of course there is a real prospect of autonomous weapons systems, as discussed elsewhere).

In the theoretical zeal of ‘decentering the human subject’ and focusing on the apparent alterity of technology, as abstract ‘objects’, we are at risk of failing to address the very concerns expressed in the articles by Madrigal and Somers. In a post entitled ‘Resisting the habits of the algorithmic mind‘, Sacasas suggests that automated software systems (‘algorithms’) are something like an outsourcing of problem solving ‘that ordinarily require cognitive labor–thought, decision making, judgement. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind.’ The prosthetic capacity of technologies like software to in some way automate these processes might be liberating, but it is also, as Sacasas suggests, morally and politically consequential. To ‘outsource the life of the mind’, for Sacasas, means to risk being ‘habituated into conceiving of the life of the mind on the model of the problem-solving algorithm’. A corollary to this supposition, I would argue, is that in the very diagnosis of this problem there is a risk that we habituate ourselves to a determinism as well. As argued in the third point, above, we risk obscuring the organisational processes that make such sociotechnical systems possible at all. In the repetition of arguments that autonomous, ‘non-human’ ‘algorithms’ are already apparently doing all of these problematic things, we will these circumstances upon ourselves. There is, therefore, an ethics to thinking about and analysing automation too.

Where does this leave us? I think it leaves us with some critical tools and tasks. We perhaps need not shy away from the complexity of the systems we discuss – the ideas and words we use can do work for us (‘algorithm’, for example, freights some meaning), but we need to be careful we don’t obscure as much as we reveal. We perhaps need to use more, not fewer, metaphors. We definitely need more studies that get at the specificity of particular forms, processes and work of automation/automated systems. All of us, journalists and academics alike, perhaps need to use our words more carefully, or use more words to get at the issues.

Simply hailing the ‘rise of the robots’ is not enough. I think this reproduces an imagination of automation that is troubling and ought to be questioned (what I’ve called an ‘automative imaginary’ elsewhere, but maybe that’s too prosaic). For people like me in geography-land to retreat into ‘high’ theory and to only discuss abstract ontological/metaphysical attributes of technology seems to me to be problematic and a retreat from that part of the ‘life of the mind’ we claim to advance. I’m not arguing we need to retreat from theory; we simply need to find a balance. A crucial issue for social science researchers of ‘algorithms’ and so on is that this sort of work is probably not the work of a lone-wolf scholar; I increasingly suspect that it needs multi-disciplinary teams. It also needs, at least in part, to produce publicly accessible work (in all senses of ‘accessible’). In this sense, work like the report on ‘Media manipulation and disinformation online‘ by Data & Society seems like a necessary (but by no means the only) sort of contribution. Prefixing your discipline with ‘digital’ and reproducing the same old theory but applied to ‘digital’ things won’t, I think, cut it.

Scaring you into ‘digital safety’

I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the module about technology that I run. In particular, about the sense of risk and threat that sometimes gets articulated about digital media and how this maybe carries with it other kinds of narrative about technology, like versions of determinism for instance. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility, for whom, and why in relation to ‘digital’ media… I could write more but I’ll just post a few of the videos for now…

“I’m so excited to join a botnet”

Glitched image of a sous-vide machine

From NY magazine:

I didn’t just buy a sous-vide circulator, I also bought what could very likely turn into a new zombie member of a botnet nobody knows about yet. (A botnet, to refresh your memory, is a group of many disparate internet-enabled computers whose security has been remotely compromised, enabling hackers to network them together and use their combined power for nefarious purposes.)

I do not actually know that my sous-vide circulator will be hacked remotely in order to power a Low Orbit Ion Cannon (popular software for launching a distributed denial-of-service attack used to take websites off the internet temporarily), but if it did happen, I would not be surprised. Oftentimes, the computers — usually very primitive computers of the kind found in security cameras, smart-home light bulbs, and cooking appliances — function normally while these processes run in the background. Perhaps my precision cooker will be attacking a major DNS server while I poach a perfect egg. Or maybe it will help take down a dissident forum as I prepare a cut of steak for the grill. The possibilities are endless.

CFP: Workshop on Trustworthy Algorithmic Decision-Making

Not sure where I found this, but it may be of interest…

Workshop on Trustworthy Algorithmic Decision-Making
Call for Whitepapers

We seek participants for a National Science Foundation sponsored workshop on December 4-5, 2017 to work together to better understand algorithms that are currently being used to make decisions for and about people, and how those algorithms and decisions can be made more trustworthy. We invite interested scholars to submit whitepapers of no more than 2 pages (excluding references); attendees will be invited based on whitepaper submissions. Meals and travel expenses will be provided.

Online algorithms, often based on data-driven machine-learning approaches, are increasingly being used to make decisions for and about people in society. One very prominent example is the Facebook News Feed algorithm that ranks posts and stories for each person, and effectively prioritizes what news and information that person sees. Police are using “predictive policing” algorithms to choose where to patrol, and courts are using algorithms that predict the likelihood of repeat offending in sentencing. Face recognition algorithms are being implemented in airports in lieu of ID checks. Both Uber and Amazon use algorithms to set and adjust prices. Waymo/Google’s self-driving cars are using Google maps not just as a suggestion, but to actually make route choices.

As these algorithms become more integrated into people’s lives, they have the potential to have increasingly large impacts. However, if these algorithms cannot be trusted to perform fairly and without undue influences, then there may be some very bad unintentional effects. For example, some computer vision algorithms have mis-labeled African Americans as “gorillas”, and some likelihood of repeat offending algorithms have been shown to be racially biased. Many organizations employ “search engine optimization” techniques to alter the outcomes of search algorithms, and “social media optimization” to improve the ranking of their content on social media.

Researching and improving the trustworthiness of algorithmic decision-making will require a diverse set of skills and approaches. We look to involve participants from multiple sectors (academia, industry, government, popular scholarship) and from multiple intellectual and methodological approaches (computational, quantitative, qualitative, legal, social, critical, ethical, humanistic).

Whitepapers

To help get the conversation started and to get new ideas into the workshop, we solicit whitepapers of no more than two pages in length that describe an important aspect of trustworthy algorithmic decision-making. These whitepapers can motivate specific questions that need more research; they can describe an approach to part of the problem that is particularly interesting or likely to help make progress; or they can describe a case study of a specific instance in the world of algorithmic decision-making and the issues or challenges that case brings up.

Some questions that these whitepapers can address include (but are not limited to):

  • What does it mean for an algorithm to be trustworthy?
  • What outcomes, goals, or metrics should be applied to algorithms and algorithm-made decisions (beyond classic machine-learning accuracy metrics)?
  • What does it mean for an algorithm to be fair? Are there multiple perspectives on this?
  • What threat models are appropriate for studying algorithms? For algorithm-made decisions?
  • What are ways we can study data-driven algorithms when researchers don’t always have access to the algorithms or to the data, and when the data is constantly changing?
  • Should algorithms that make recommendations be held to different standards than algorithms that make decisions? Should filtering algorithms have different standards than ranking or prioritization algorithms?
  • When systems use algorithms to make decisions, are there ways to institute checks and balances on those decisions? Should we automate those?
  • Does transparency really achieve trustworthiness? What are alternative approaches to trusting algorithms and algorithm-made decisions?

Please submit white papers along with a CV or current webpage by October 9, 2017 via email to trustworthy-algorithms@bitlab.cas.msu.edu. We plan to post whitepapers publicly on the workshop website (with authors’ permission) to facilitate conversation ahead of, at, and after the workshop. More information about the workshop can be found at http://trustworthy-algorithms.org.

We have limited funding for PhD students interested in these topics to attend the workshop. Interested students should also submit a whitepaper with a brief description of their research interests and thoughts on these topics, and indicate in their email that they are PhD students.

CFP: Theorising digital space

Glitched image of a 1990s NASA VR experience

In another of a series of what feel dangerously like back-to-the-1990s moments, as some geographers attempt to wrangle ‘digital geographies’ into a brand (which I find problematic), I saw the CFP below for the AAG.

I am sorry if it seems like I’m picking on this one CFP; I have no doubt that it was written with the best of intentions, and if I were able to attend the conference I would apply to speak and would attend the session. I hope others will too. In terms of this post, it’s simply the latest in a line of conference sessions that, I think, unfortunately seem to miss, or even elide, long-standing debates in geography about mediation.

Maybe my reaction is in part because I cannot attend (I’m only human, I’d quite like to go to New Orleans!), but it is also in part because I am honestly shocked at the inability of debates within what is, after all, a fairly small discipline to move forward in terms of thinking about ‘space’ and mediation. This stands out because it follows from ‘digital’ sessions at the AAG last year that made similar sorts of omissions.

In the late 1990s a whole host of people theorised place/space in relation to what we’re now calling ‘the digital’. Quite a few were geographers. There exists a significant and, sometimes, sophisticated literature that lays out these debates, ranging from landmark journal articles to edited books and monographs that all offer different views on how to understand mediation spatially (some of this work features in a bibliography I made ages ago).

Ironically, perhaps, all of this is largely accessible ‘online’: you need only search for relevant key terms and follow citation chains using repositories – much of it is there, and many of the authors are accessible ‘digitally’ too. And yet, periodically, we see what is in effect the same call for papers asking similar questions: is there a ‘physical’/’digital’ binary [no], what might it do, how do we research the ‘digital’, ‘virtual’ etc. etc.

We, all kinds of geographers, are not only now beginning to look at digital geographies – this work has been going on for some time – and it would be great if that were acknowledged in the way that Prof. Dorothea Kleine did with rare clarity in her introduction to the RGS Digital Geographies Working Group symposium earlier this year (skip to 03:12 in this video).

So, I really hope that some of the authors of books like “Virtual Geographies“, to take just one example (there are loads more – I’m not seeking to be canonical!), might consider re-engaging with these discussions to lend some of the perspective that they have helped accrue over the last 20+ years and speak at, or at least attend, sessions like this.

I hope that others will consider speaking in this session, to engage productively and to open out debate, rather than attempt to limit it in a kind of clique-y brand.

Theorizing Place and Space in Digital Geography: The Human Geography of the Digital Realm

In 1994 Doreen Massey released Space, Place and Gender, bringing together in a single volume her thoughts on many of the key discussions in geography in the 1980s and early 1990s. Of note was the chapter ‘A global sense of place’ and its discussion of what constitutes a place. Massey argues that places, just like people, have multiple identities, and that multiple identities can be placed on the same space, creating multiple places inside space. Places can be created by different people and communities, and it is through social practice, particularly social interaction, that place is made. Throughout the book Massey also argues that places are processual, that they are not frozen moments, and that they are not clearly defined through borders. As more and more human exchanges in the ‘physical realm’ move to, or at least involve in some way, the ‘digital realm’, how should we understand the sites of the social that happen to be in the digital? What does a human geography, place-orientated understanding of the digital sites of social interaction tell us about geography, both in the digital and the physical world?

Massey also notes that ‘communities can exist without being in the same place – from networks of friends with like interests, to major religious, ethnic or political communities’. Ever-evolving mobile technologies, the widening infrastructures that support them and increasing access to smartphones – thanks in part to new smartphone makers in China releasing affordable yet powerful handsets around the world – have made access to the digital realm, whether fixed in place (through computers) or, more often, through mobile technologies, a possibility for an increasing number of people worldwide. How do impoverished or excluded groups use smart technologies to (re)produce place, or a sense of place, in ways that include links to the digital realm? From rural farming communities to refugees fleeing Syria and many more groups, in what ways does the digital realm afford spatial and place-making opportunities to those lacking in place or spatial security?

How are we to understand the digital geographies of platforms and the spaces that they give us access to? Do platforms themselves even have geographies? Recently geographers such as Mark Graham have begun a mapping of the dark net, but how should we understand the geographies of other digital spaces, from instant messaging platforms to social media or video streaming websites? What is visible and what is obscured? And what can we learn about traditional topics in social science, such as power and inequality, when we begin to look at digital geographies?

In this session of five papers we are looking for papers exploring:

  • Theories of place and space in the digital realm, including those that explore the relationship between the digital and physical realms
  • Research on the role of the digital realm in (re)producing physical places, spaces and communities, or creating new places, spaces and communities, both in the digital realm and outside of it.
  • Papers considering the relationship between physical and digital realms and accounts of co-production within them.
  • The role of digital technologies in providing a sense of space and place, spatial security and secure spaces and places to those lacking in these things.
  • Research exploring the geographies of digital platforms, websites, games or applications, particularly qualitative accounts that examine the physical and digital geographies of platforms, websites, games or applications.
  • Research examining issues of power, inequality, visibility and distance inside of the digital realm.