Category Archives: technology

Bernard Stiegler – Digital shadows and (en)light(enment)

The translation below is the second half of the “Net Blues” interview with Bernard Stiegler conducted by the Le Monde blog “Lois des réseaux” [Laws of the networks].

In this second half of the interview Stiegler discusses how the web should evolve, developing the trope of ‘enlightenment’ (which he has discussed at length in Taking Care of Youth and the Generations) and drawing out the play on words between light, shadow and ‘the Enlightenment’. Highlighting the web as the latest stage in publishing technologies (which have historically been central to political movements), Stiegler argues that a new industrial politics must be developed, by Europe, as the ‘curative’ counter to the ‘toxic’ trend towards automation and homogeneity brought about by computation. This is the ‘pharmacological’ character of the internet that Stiegler discusses in ‘the Net Blues‘. The new industrial politics Stiegler argues for has universities and the production of knowledge at its heart.

As usual clarifications or questions over the translation of a particular word are in square brackets and all emphasis is in the original text.

Digital shadows and (en)light(enment)

What do you think needs to happen for the web to evolve?

Bernard Stiegler: I think that the web today is an entropic system. It first appeared, to most of us, myself included, as a negentropic opportunity, that is, as a new capacity for diversification, especially because it has allowed people to begin to differentiate between different kinds and uses of media [parce qu’il a permis de démassifier les médias]. Today, major newspapers, such as Le Monde and many others, allow all kinds of actors to convene around the newspaper, collecting together a very diverse range of views. This is an opportunity, and it has given the sense of a kind of renaissance after the development of an extreme consumerism within the mass media and the culture industries, which had become an undifferentiated mass [qui avait été une sorte de laminoir] – especially in the last twenty years. Media such as television deteriorated terribly. All of this was related to an economic and industrial model that is now in decline.

Nevertheless, the web initially appeared as an opportunity for negentropy, that is to say of diversification. However, what we’ve discovered in recent years, in particular since we began to speak of the ‘Big Four‘, is the extraordinary hegemony of global giants who have gained an unprecedented prescriptive power over behaviour.

Only the United States takes full advantage of this economy of data. The data economy, which is destroying European national taxation and increasingly deprives public authorities of their capacity to act, is based on a generalised calculability which is at odds with the negentropic promise of the web. This calculability tends towards the reintroduction of the law of audience ratings. The page ranking performed by Google’s algorithm is a very specific, surgical form of audience rating, which is very efficient and very refined, but which, like television audience ratings, leads to the transformation of singularities into peculiarities, i.e. computable items, because it homogenises and de-singularises. Unlike the individual, the singular is incalculable. In this case, we have a new entropic process which we feel threatens languages. This is how I interpret Frederic Kaplan’s analysis of what he calls “linguistic capitalism.”

However, this predicament is not written in law, and things could be entirely different: the web in the service of an intensification of “individuation”, in other words of singularities, is the future – and it is the future of Europe. As things stand, however, the web tends to totally horizontalise and level out information, because it requires processes of computation based on the removal of any opacity.

As I said earlier, the web and the internet are publishing technologies. Publishing technologies are the origin of what Plato calls πολιτεία (Politeia, Ed.), which we translate as res publica in Latin. Why do we translate it as res publica? Because Politeia constitutes a “public good”, as we discussed earlier. And the establishment of public goods requires publication technologies, which for the Greeks was writing. Marcel Detienne has shown that the Athenian city is like a vast typewriter that wrote into the marble of the walls of the city. After each decision of the Βουλή (boulè [parliament], Ed.) the publishers of the law wielded the hammer and chisel to carve the decision in stone. This process of publication created the public right of citizens to criticise the law. One enters the politeia, citizenship, and [so] paves the way to democracy. Today, with digital computation, a whole new publication system is in place.

However, we always claim that the state of political rights, and the accompanying reason, is what the Greeks called λόγος (logos, Ed.). However, this assumes that such processes of publication enable disagreement, the publication of contradictory arguments, which we call public debate, and which is a fundamental rule of all rational knowledge. The promise of the web was to revive public, political, scientific or aesthetic debate. But this promise has not been kept. From the moment Google, Amazon and such companies had to make a profit from all of this, they became totally interested in equalising and levelling out data in order to exploit it with algorithms, crushing disagreement rather than enabling traceability and widespread intelligibility. I do not think this situation can last. The basis of knowledge, in all its forms – I’m not just talking about theoretical knowledge, it’s also true of ‘know-how’ [savoir-faire] and life skills [savoir-vivre] – is grounded in a fundamental diversification, which, when it ceases, leaves behind dead knowledge – like dead languages and towns transformed into museums [villes muséifiées]. If the sciences and knowledge are founded on publication processes, the development of the digital is a radical transformation of knowledge, and in particular of academic knowledge. The power of Europe and the West is founded on power over knowledge. The emperor Frederic Barberousse [Frederick Barbarossa], who, in opposition to the Pope, granted freedom to the University of Bologna in the 12th century, initiated a process followed by Oxford, the Sorbonne and Cambridge. It is not the Conquistadors and the caravels that are the primary origin of the centuries-long global domination of the West: it is the reliance upon universities. This clearly evolves in new directions with the appearance of new devices for printing – and the cognitive as well as spiritual revolution brought about by Luther is clearly a consequence of such publishing technologies, by which Luther makes reading the scriptures for oneself the heart of his struggle. Along with the Counter Reformation, this leads to the foundation of the Jesuit Schools and the Jesuits evangelising around the world through their missions, which constitutes a fundamental aspect of the Enlightenment project, and this, with Condorcet[1] and the French Revolution, leads to Jules Ferry[2] via Guizot[3].

Obviously, the web, and the digital more generally, totally reconfigures these maps from top to bottom – not only the maps of teaching but the conditions of scientific research and the life of the mind in all its forms. Europe should not fear this, all the less so since it is the origin of the concepts of the web and HTML, while in France CNET [The National Centre for Telecommunication Studies] (which has been destroyed by irresponsible policies) played an important role in the design of the ATM and GSM networks. Europe has played an extremely important role in the configuration of all technical systems, but has failed to make this common knowledge [? elle n’a pas su le socialiser] because European political and economic actors are often blind to such issues. Thus, even when researchers and scientists are daring and inventive, they find themselves confined to an imitation of the ‘American model’, which is a disaster for a Europe that is totally devoid of an industrial strategy equal to the challenges of our times, and condemned to a ‘downgrading’.

What we call the Enlightenment emerged in Europe, and it was produced by the republic of letters resulting from the printing press. We are no longer in the era of pure Enlightenment: we have entered an era of Shadow and Enlightenment [the word “Lumières” here is used as a play on words between light and enlightenment]. It is an era of a pharmacological consciousness that the speed of technology, which is the speed of light, also casts the shadows of the ‘toxicity’ of the digital, shadows which necessarily accompany its ‘curative’ capacity [sa «curativité»]. A new industrial politics should be supported by Europe and must be based on a curative politics and economics of the digital, deliberately and rationally battling against its toxicity. After Edward Snowden’s revelations [discussed in more detail in ‘the Net Blues‘] every citizen is aware of this huge problem that puts the future gravely at risk [un immense problème qui hypothèque très gravement l’avenir].

Europe should unite around a project for a new Enlightenment that is at once scientific, philosophical, industrial, and economic; one which fully seizes the immense challenges brought by computation [la numérique]. Such a politics should be based upon an unprecedented use of universities and research organisations. The very nature of knowledge is destabilised by the digital. For example, to work in the nanosciences today means working with digital artifacts to produce nanoscale phenomena, that is to say, at the quantum level. These are not, strictly speaking, phenomena, i.e. objects of intuition, but what Kant called “noumena”, that is to say the objects of understanding and reason. However, these are objects that are, at the same time, completely constituted in nanophysics, and are the objects of scientific experiments, by being simulated and mathematically modelled using computers. Genetic biology is made possible today by biostations, that is, by informational calculations made on very large amounts of data. The digital alters the practices of mathematicians, and Frederic Kaplan has shown it also modifies the development of languages. [Furthermore] geography has become fundamentally linked to geographical information systems, as the GPS standard has been socialised within our everyday lives. The structures of digitisation are transforming all knowledge, including know-how [les savoir faires] and life skills [les savoir vivre].

Faced with such a universal upheaval it is essential to reconfigure all academic research and to organise new links between universities and school curricula [les pratiques scolaires] so that the digital enters schools on a rational basis, not through the stories told [storytelling] by economic actors advocating a legitimation of their own models [of the digital]. This model [of working] should be analysed, critiqued and continuously improved, for that is [the practice of] reason. Taking such a critique to the global level, Europe could reconstruct a digital industry which is currently, tragically, lacking. It is not only children but also parents and elected officials who need to be acculturated [to such changes] through schools, so that European society can be deeply reconfigured, and take with it a new model of the web.

Notes

1. In the 18th century, the Marquis de Condorcet developed a voting method designed to select the candidate who would beat each of the other candidates in head-to-head (pairwise) contests; a minimal code sketch of this idea follows these notes.

2. Jules Ferry was a 19th century republican who promoted laicism and French colonial expansion.

3. François Guizot was a prominent 19th century statesman who significantly promoted education.
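
To make the pairwise idea in note 1 concrete, here is a minimal sketch, not drawn from the interview itself, of how a Condorcet winner can be computed from ranked ballots; the candidate names and ballots below are purely illustrative.

    # A minimal, illustrative sketch of Condorcet's pairwise method:
    # each ballot ranks candidates from most to least preferred, and the
    # winner, if one exists, beats every rival in head-to-head contests.
    ballots = [
        ["Ada", "Ben", "Cal"],
        ["Ada", "Cal", "Ben"],
        ["Ben", "Cal", "Ada"],
        ["Cal", "Ada", "Ben"],
        ["Ada", "Ben", "Cal"],
    ]

    def condorcet_winner(ballots):
        candidates = set(ballots[0])
        for candidate in candidates:
            beats_every_rival = True
            for rival in candidates - {candidate}:
                # Count ballots ranking `candidate` above `rival`.
                wins = sum(1 for b in ballots if b.index(candidate) < b.index(rival))
                if wins * 2 <= len(ballots):  # loses or ties this pairwise contest
                    beats_every_rival = False
                    break
            if beats_every_rival:
                return candidate
        return None  # no Condorcet winner (preferences form a cycle)

    print(condorcet_winner(ballots))  # prints "Ada", who beats Ben and Cal head-to-head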

Algorithmic Practices: Emergent interoperability in the everyday

This year the RGS-IBG Annual Conference will be coming to the University of Exeter, my institution, and there are some exciting sessions already in the pipeline. I wanted to bring one to the attention of those who happen to look at this website:

Algorithmic Practices: Emergent interoperability in the everyday

Sponsored by: the History and Philosophy of Geography Study Group
Convened by: Eric Laurier and Chris Speed (Edinburgh) and Monika Buscher (Lancaster)

An ever-increasing proportion of the interactions that we have with digital platforms, apps and devices are mediated according to complex algorithms. Whether it be the real time analytics that draw us into playing a game on our phone, or tailored recommendations built from our historical searching and buying habits, we structure our daily lives in response to ‘performative infrastructures’ (Thrift, 2005: 224), most of them hidden deliberately by their makers.

Yet, in responding to the summons, the predictions, the recommendations, the help, the calculations that occur as platforms try to anticipate our next actions, we are learning how they work and don’t work. In our ad hoc assemblies of devices, apps and screens we short cut and re-make algorithms. For instance, in disaster response, ad hoc interoperability and agile response are creating incentives for ‘systems of systems’ that allow locally accomplished convergence of diverse information systems, with implications for data surge capacity as well as protection and privacy (Mendonça et al 2007).

Described as “a structured approach to real-time mixing and matching of diverse ICTs to support individuals and organizations in undertaking response”, emergent interoperability may be becoming commonplace in less dramatic daily practices as individuals negotiate the range of algorithms that “react and reorganize themselves around the users” (Beer 2009).

This panel invites papers and presentations that provide insight into conditions and settings in which emergent operability and interoperability occurs within society.

Areas of interest:

  • Dashboards, decongestion, security and the other promises of smart cities (Kitchin 2014)
  • Wayfaring, wayfinding and other mobilities with map apps (Brighenti 2012)
  • Changing forms of participation and collaboration through social media algorithms
  • Responding to and attempting to manipulate predicted actions and recommendations
  • The production of calculated publics (Gillespie 2014)
  • Political contestation of algorithms
  • Ad hoc systems
  • Textures of communication in digital and traditional media

The deadline is Friday 6th February 2015.

Email speakers, title and abstract to eric [dot] laurier [at] ed.ac.uk

References:

Beer, D. (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6): 985–1002.

Brighenti, A. M. (2012). New Media and Urban Motilities: A Territoriologic Point of View. Urban Studies, 49(2), 399–414.

Gillespie, T. (2014) The Relevance of Algorithms, pp. 167–194 in Media Technologies: Essays on Communication, Materiality and Society, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.

Kitchin, R. (2014). Thinking critically about and researching algorithms. The Programmable City Working Paper 5, Maynooth.

Mendonça, D., Jefferson, T., & Harrald, J. (2007). Emergent Interoperability: Collaborative Adhocracies and Mix and Match Technologies in Emergency Management. Communications of the ACM, 50(3), 44-9.

Thrift, N. (2005) Knowing Capitalism. London: Sage.

Cultural Geographies article published – Memory Programmes

My article in Cultural Geographies has been published as part of the first issue of 2015. You can find it amongst a really interesting set of papers including two theme sections on habit and on technology, memory and collective knowing. Other articles in the issue include those by J-D Dewsbury and David Bissell, Andrew Lapworth, my colleague Jen Lea, Maria Hynes & Scott Sharpe, and Matthew Wilson.

A reminder about my article, Memory Programmes:

The aim of the article is to interrogate some key elements of how software has become a means of ‘industrialising’ memory, following Bernard Stiegler. This industrialisation of memory involves conserving and transmitting extraordinary amounts of data: data that is both volunteered and captured in everyday life, and operationalised in large-scale systems. Such systems constitute novel sociotechnical collectives which have begun to condition how we perform our lives such that they can be recorded and retained.

To investigate the programmatic nature of our mediatised collective memory the article has three parts. The first substantive section looks at a number of technologies as means of capturing, operating upon and retaining our everyday activities in ‘industrial’ scale systems of memory. Particular attention is paid to the quasi-autonomous agency of these systems, which appear to operate at a scale and speed that exceed human capacities of oversight.

In the second section I look at the mnemonic capabilities of networked technologies of digital mediation as ‘mnemotechnologies’. Following Stiegler, these are technologies and technical supports that both support and reterritorialise what we collectively understand about our everyday lives.

The conclusion of the article addresses the ways in which an ‘industrialisation of memory’ both challenges and transforms the ways in which we negotiate collective life.

A political economy of Twitter data – revised and published on LSE Impact blog

Last year I wrote a blog post for the Contagion project website, building from the experience of attempting to do research with Twitter data as relative novices. Putting the pragmatics of doing so to one side, it was striking that doing this kind of research with Twitter’s apparatus is neither easy nor, when one delves a bit deeper, ‘free’.

The post was picked up by the LSE Impact blog, who asked to re-blog it, which was very nice of them. So, you can find a slightly updated version of the blog post (updated numbers and sources, and a bit more nuance in the argument) there.

The question I end up posing is: “Should researchers be using data sources (however potentially interesting/valuable) that restrict the capability of reproducing our research results?” This is not easily answered, not least when so many ‘non-academic’ researchers are merrily plugging away producing social scientific research that is increasingly consumed by the general public, is gaining influence, and could, perhaps, benefit from some critical engagement…

Please do read the post and get in touch if you’d like to discuss this, and any of our research, further.

A political economy of Twitter data?

Over on our Contagion project website, I have written a blog post concerning some issues around whether or not and how we can or should access and use Twitter data in research.

Here’s the beginning:

Many of the research articles and blogs concerning conducting research with social media data, and in particular with Twitter data, offer overviews of their methods for harvesting data through an API. An Application Programming Interface is a set of software components that allow third parties to connect to a given application or system and utilise its capacities using their own code. Most of these research accounts tend to make this process seem rather straightforward. Researchers can either write a programme themselves or utilise one of several tools that have emerged to provide a WYSIWYG interface for undertaking the connection to the social networking platform, such as implementing yourTwapperKeeper or COSMOS, or using a service such as ScraperWiki (to which I will return). However, what is little commented upon are the restrictions placed on access to data through many of the social networking platform APIs, in particular Twitter’s. The aim of this blog post is to address some of the issues around access to data and what we are permitted to do with it.

Read more >.
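
To give a flavour of what ‘harvesting data through an API’ looks like at the level of code, here is a minimal sketch; the endpoint URL, parameter names and token below are placeholders rather than Twitter’s actual interface, which requires application registration, OAuth authentication and enforces rate limits.

    # A hypothetical sketch of querying a REST-style search API for posts.
    # The endpoint, parameters and token are placeholders, not the real
    # Twitter API, which requires OAuth and rate-limits requests.
    import requests

    API_URL = "https://api.example.com/search"  # placeholder endpoint
    TOKEN = "YOUR_ACCESS_TOKEN"                 # issued when registering an app

    def search_posts(query, max_results=100):
        response = requests.get(
            API_URL,
            params={"q": query, "count": max_results},
            headers={"Authorization": "Bearer " + TOKEN},
            timeout=30,
        )
        response.raise_for_status()  # surfaces auth or rate-limit errors
        return response.json().get("results", [])

    for post in search_posts("contagion"):
        print(post.get("id"), post.get("text"))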

Observation from Iraq: The Nokia IED trigger

A friend of mine is a journalist for France 24 and is currently in Iraq covering the conflict with IS. His Facebook posts are extraordinary, very powerful, and I wanted to share one of them. So, courtesy of my friend in Iraq (and with his permission), here is a post (verbatim) concerning the use of Nokia 113 phones as IED detonation triggers:

 

These phones and wires are the remote control detonation devices left behind by Islamic State forces.

The initial activation has a 6 minute timer, after which the fuse becomes “live”.

Then a simple phone call is enough to detonate the explosive charge.

The IS choose this model of Nokia because its battery lasts 10 days. That’s basically a slow fuse that lasts a week and a half.

It’s not failsafe. One phone had 8 missed calls – 8 failed detonation attempts. The Peshmerga commander who found that one says the only jamming device he has is protection from above.

Improvised Explosive Devices are one of the biggest threats the peshmerga face, as the jihadists concede territory.

The fear of bombs has slowed their progress into hostile territory. The Islamists melt away like wraiths, leaving their IEDS to strike blindly at the Peshmerga.

Your email address is worth 100 tea lights…

Ikea Family mailshot: 100 tea lights for your email address

How much is personal data worth? How might that be calculated? These are hard questions. Nevertheless, personal data does attract particular kinds of value, often determined by those harvesting and selling it on, such as Facebook, Google and so on.

These are rarely visible calculations, probably necessarily so – data is sometimes only worth something when it is part of a larger set. We are both an individual and part of one or more populations (which can be variously defined and derived in data). When you make up a ‘segment’ of a given population, your data, and what can thereby be inferred, has a particular value.

For example, the email marketing company Topspin apparently calculate the pecuniary value of a person’s email address to a band harvesting it based upon the amount of money that person (fan) will, on average, go on to spend on their music.

In a recent piece for the New York Times website, Rebecca Lieb, a digital advertising and media analyst at the Altimeter Group, offers an example:

“Facebook has deep, deep data on its users. You can slice and dice markets, like women 25 to 35 who live in the Southeast and are fans of ‘Breaking Bad’ …” The new Atlas platform, she said, “can track people across devices, weave together online and offline.”

The data collected by various market research companies carries a notional value that is sometimes rendered visible in stark ways. Another example illustrates this: in a 2011 article for the Huffington Post, the price list of the marketing data company Rapleaf shows how much particular pieces of data were sold for. Age and gender are given away free, whereas ‘Likely smart phone user’ is $0.03.
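
As a small illustration of how such per-attribute prices might add up, here is a sketch that totals the notional value of a single profile; only the ‘age’, ‘gender’ and ‘likely smart phone user’ figures come from the reported Rapleaf list, the remaining attributes and prices are invented for the example.

    # Illustrative only: summing per-attribute prices into the notional
    # value of one data profile. The first three prices reflect the
    # reported Rapleaf list; the remainder are invented placeholders.
    price_list = {
        "age": 0.00,                           # given away free
        "gender": 0.00,                        # given away free
        "likely smart phone user": 0.03,       # reported price
        "household income band": 0.05,         # hypothetical
        "interest in home furnishings": 0.04,  # hypothetical
    }

    profile = ["age", "gender", "likely smart phone user", "interest in home furnishings"]

    value = sum(price_list.get(attribute, 0.0) for attribute in profile)
    print("Notional value of this profile: ${:.2f}".format(value))  # $0.07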

We are thus worth something insofar as we are represented (accurately or otherwise) as a collection of data in one or more databases. We individually value our own data variously, depending on its use, context and so on. An email address, then, may be valuable if it is private, or not valuable if it is used for any old competition entry – this is often contextual and that can, of course, be missed by the marketing data companies (perhaps this is a good thing).

Nevertheless, we, or, more accurately, the data that are used to variously represent us, are commodified – our data-selves are products. Any sense of a unified ‘self’ is possibly too neat, our physically bodied selves rendered endlessly divisible and reducible to data representations, what Deleuze referred to as ‘dividuals’ in his excellent ‘Postscript on Societies of Control‘. There is a real sense in which, thought in this way, we are variously rendered separate from and without control of our representations in data.

The Ikea flyer, pictured above, soliciting an email address for a notional inducement of 100 tea lights made me think through the above and back to the excellent Paying Attention conference of 2010, concerning the idea of attention economies. At the conference, and in other contexts, the redoubtable Tim Kindberg formulated his ‘Facebook data provocations’ – encouraging others to speculate about what would happen if we (the users) charged Facebook for our data. Thinking back over the academic moment of worry over privacy it seems necessary to continue to think about these things. While the horse has bolted and the stable door has long since been the portal to an upmarket barn conversion, I argue that these questions remain pertinent.

Paper accepted – Memory programmes: the retention of collective life

I am pleased to share that I have recently had a paper accepted for Cultural Geographies, which will form part of a theme section/issue co-edited by Sarah Elwood and Katharyne Mitchell concerning “Technology, memory and collective knowing”—stemming from a session at the 2013 AAG in Los Angeles.

The paper is entitled ‘Memory programmes: the retention of collective life’ and builds upon a theoretical conference paper I gave at the Conditions of Mediation conference in 2013.

The aim of the article is to interrogate some key elements of how software has become a means of ‘industrialising’ memory, following Bernard Stiegler. This industrialisation of memory involves conserving and transmitting extraordinary amounts of data: data that is both volunteered and captured in everyday life, and operationalised in large-scale systems. Such systems constitute novel sociotechnical collectives which have begun to condition how we perform our lives such that they can be recorded and retained.

To investigate the programmatic nature of our mediatised collective memory the article has three parts. The first substantive section looks at a number of technologies as means of capturing, operating upon and retaining our everyday activities in ‘industrial’ scale systems of memory. Particular attention is paid to the quasi-autonomous agency of these systems, which appear to operate at a scale and speed that exceed human capacities of oversight.

In the second section I look at the mnemonic capabilities of networked technologies of digital mediation as ‘mnemotechnologies’. Following Stiegler, these are technologies and technical supports that both support and reterritorialise what we collectively understand about our everyday lives.

The conclusion of the article addresses the ways in which an ‘industrialisation of memory’ both challenges and transforms the ways in which we negotiate collective life.

I have copied the abstract below, and I’d be happy to share pre-publication copies; please contact me via email.

Abstract:

This article argues that, in software, we have created quasi-autonomous systems of memory that influence how we think about and experience life as such. The role of mediated memory in collective life is addressed as a geographical concern through the lens of ‘programmes’. Programming can mean ordering, and thus making discrete; and scheduling, making actions routine. This article addresses how programming mediates the experience of memory via networked technologies. Materially recording knowledge, even as electronic data, renders thought mentally and spatially discrete and demands systems to order it. Recorded knowledge also enables the ordering of temporal experience both as forms of history, thus the sharing of culture, and as the means of planning for futures. We increasingly retain information about ourselves and others using digital media. We volunteer further information recorded by electronic service providers, search engines and social media. Many aspects of our collective lives are now gathered in cities (via CCTV, cellphone networks and so on) and retained in databases, constituting a growing system of memory of parts of life otherwise forgotten or unthought. Using examples, this article argues that, in software, we have created industrialised systems of memory that influence how we think about living together.

Keywords: memory, technology, mnemotechnics, industrialisation, programming, Stiegler

Rob Kitchin on ‘digital geography’

Based on his notes as discussant in the recent ‘co-production of digital geographies‘ sessions at the RGS-IBG conference (2014), Rob Kitchin has written a helpful and concise blog post reflecting on what we might mean when we talk about ‘digital geographies’. This sort of relates to some of what I’ve written about the kinds of spatial imaginaries of a ‘virtual‘ employed within and beyond geography when discussing digital mediation. So, Rob Kitchin’s post is definitely worth a read and I’ve reproduced the body of it here:

Last Friday I acted as a discussant for three sessions (no. 1, no. 2, no. 3) on Digital Geography presented at the RGS/IBG conference in London.  The papers were quite diverse and some of the discussion in the sessions centred on how to frame and make sense of digital geographies.

In their overview paper, Elisabeth Roberts and David Beel categorised the post-2000 geographical literature which engages with the digital into six classes: conceptualisation, unevenness, governance, economy, performativity, and the everyday.  To my mind, this is quite a haphazard way of dividing up the literature.  Instead, I think it might be more productive to divide the wide range of studies which consider the relationship between the digital and geography into three bodies of work:

Geography of the digital

These works seek to apply geographical ideas and methodologies to make sense of the digital.  As such, this body of work focuses on mapping out the geographies of digital technologies, their associated socio-technical assemblages and production.  Such work includes the mapping of cyberspace, charting the spatialities of social media, plotting the material geographies of ubiquitous computing, detailing the economic geographies of component resources, technologies and infrastructures, tracing the generation and flows of big data, and so on.

Geography produced by the digital

This body of work focuses on how digital technologies and infrastructures are transforming the geographies of everyday life and the production of space.  Such work includes examining how digital technologies and ICTs are increasingly being embedded into different spatial domains – the workplace, home, transport systems, the street, shops, etc.; how they mediate and augment socio-spatial practices and relations such as producing, consuming, communicating, playing, etc; how they shape and remediate geographical imaginaries and how spaces are visioned, planned and built; and so on.

Geography produced through the digital

An increasing amount of geographical scholarship, praxis and communication is now undertaken using digital technologies.  For example, generating, recording and analyzing data using digital devices and associated software and databases; the collection and sharing of datasets and outputs through digital archives and repositories; discussing ideas and conducting debate via mailing lists and social media; writing papers and presentations, producing maps and other visualizations using computers; etc.  A fairly substantial body of work thus reflects on the difference digital technologies make to the production of geographical scholarship.

Taken together these three bodies of work, I would argue, constitute digital geography.

At the same time, however, I wonder about the utility of bounding digital geography and corralling studies within its bounds.  To what extent is it useful to delimit it as a defined field of research?

It might be more productive to reframe much of what is being claimed as digital geography with respect to its substantive focus.  For example, examining the ways in which digital technologies are being pervasively embedded into the fabric of cities and how they modulate the production of urban socio-spatial relations is perhaps best framed within urban geography.  Similarly, a study looking at the use of digital technology in the delivery of aid in parts of the Global South is perhaps best understood as being centrally concerned with development geography.  In other words, it may well be more profitable to think about how the digital reshapes many geographies, rather than to cast all of those geographies as digital geography.

Nonetheless, it is clear that geographers still have much work to do with respect to thinking about the digital.  That is a central task of my own research agenda as I work on the Programmable City project.  I’d be interested in your own thoughts as to how you conceive and position digital geography, so if you’re inclined to share your views please leave a comment.