Category Archives: pervasive media

“The dictatorship of data” (on BBC R4)

Just caught up with a programme aired on BBC Radio 4 last week called “The Dictatorship of Data”, presented by their Security Correspondent Gordon Corera. The role of the presenter certainly inflects the tone of the programme. It focuses on the growth in the collection of data – the wholesale capture of data exhaust and metadata from our devices and public platforms – and on how that collection and subsequent aggregation both enables and presents problems for forms of surveillance.

It is an interesting programme insofar as it offers a general introduction to several key issues. The discussion of the geopolitical responses to the uses of social media platforms and how Russia in particular wants to capture some of that capability (particularly in relation to SORM) is good, and it mostly draws on the authors of a book that sounds good: “The Red Web”. Likewise, there’s some entertaining and perhaps disquieting discussion of ‘Hacking Team’, purveyors of malware to governments. Again, this is understandably figured in geopolitical terms.

However, I’d say it is slightly wide of the mark in terms of the discussion offered of the prospects of social media enabling some kind of authoritarianism. The way it is discussed takes as its assumption that people are faithfully reporting their actual opinions, ‘real’ events and so on, and that they are individuals (and not bots) – as though social media are some kind of unproblematic ‘social sensing platform’. Now, some will argue that there is a way to somehow ‘solve’ the ‘biasing’ of the sample represented by a given social media platform’s population and I’m no statistician so… meh. I remain sceptical that any kind of claim about ‘representivity’ is particularly meaningful.

I think those who want a more nuanced viewpoint on some of these issues probably ought to check out Louise Amoore’s The Politics of Possibility and her papers following on from this; likewise it’s worth checking out both David Murakami Wood’s and Francisco Klauser’s work on surveillance too (of course, there’s more – but you have access to a search engine 😉 ).

CFP> Streams of Consciousness: Data, Cognition and Intelligent Devices, Apr 2016

This looks interesting:

Streams of Consciousness

Data, Cognition and Intelligent Devices

21st and 22nd of April 2016

Call for Papers


“What’s on your mind?” This is the question to which every Facebook user now responds. Millions of users share their thoughts in one giant performance of what Clay Shirky once called “cognitive surplus”. Contemporary media platforms aren’t simply a stage for this cognitive performance. They are more like directors, staging scenes, tweaking scripts, working to get the best or fully “optimized” performance. As Katherine Hayles has pointed out, media theory has long taken for granted that we think “through, with and alongside media”. Pen and paper, the abacus, and modern calculators are obvious cases in point, but the list quickly expands and with it longstanding conceptions of the Cartesian mind dissolve away. Within the cognitive sciences, cognition is now routinely described as embodied, extended, and distributed. They too recognize that cognition takes place beyond the brain, in between people, between people and things, and combinations thereof. The varieties of specifically human thought, from decision-making to reasoning and interpretation, are now considered one part of a broader cognitive spectrum shared with other animals, systems, and intelligent devices.

Today, the technologies we mostly think through, with and alongside are computers. We routinely rely on intelligent devices for any number of operations, but this is no straightforward “augmentation”. Our cognitive capacities are equally instrumentalized, plugged into larger cognitive operations from which we have little autonomy. Our cognitive weaknesses are exploited and manipulated by techniques drawn from behavioural economics and psychology. If Vannevar Bush once pondered how we would think in the future, he received a partial response in Steve Krug’s best-selling book on web usability: Don’t Make Me Think! Streams of Consciousness aims to explore cognition, broadly conceived, in an age of intelligent devices. We aim to critically interrogate our contemporary infatuation with specific cognitive qualities – such as “smartness” and “intelligence” – while seeking to genuinely understand the specific forms of cognition that are privileged in our current technological milieu. We are especially interested in devices that mediate access to otherwise imperceptible forms of data (too big, too fast), so it can be acted upon in routine or novel ways.

Topics of the conference include but are not limited to:

  • data and cognition
  • decision-making technologies
  • algorithms, AI and machine learning
  • visualization, perception
  • sense and sensation
  • business intelligence and data exploration
  • signal intelligence and drones
  • smart and dumb things
  • choice and decision architecture
  • behavioural economics and design
  • technologies of nudging
  • interfaces
  • bodies, data, and (wearable) devices
  • optimization
  • web and data analytics (including A/B and multivariate testing)

Please submit individual abstracts of no longer than 300 words. Panel proposals are also welcome and should also be 300 words; panel proposals should include individual abstracts as well. The deadline for submissions is Friday the 18th of December; accepted submissions will be notified by the 20th of January 2016.
Streams of Consciousness is organised by Nathaniel Tkacz and Ana Gross. The event is supported by the Economic and Social Research Council.

Reblog> Does Mean Open Access Is Becoming Irrelevant?

A really interesting post by Gary Hall on his blog around what ‘open access’ means and how we negotiate what might be understood as the ‘attention economy’ of academia, in relation to the ways in which sites like and ResearchGate leverage the ‘respectability’ of our work and our collective need to find audiences in order to generate valuable metadata. As Hall argues:

 In this world who gate-keeps access to (and so can extract maximum value from) content is less important, because that access is already free, than who gate-keeps (and so can extract maximum value from) the data generated around the use of that content, which is used more because access to it is free.

I heartily recommend reading the whole piece.

Does Mean Open Access Is Becoming Irrelevant?

A brief discussion took place this month on the Association of Internet Researchers air-l listserv concerning a new book from the publishers Edward Elgar: Handbook of Digital Politics. Edited by Stephen Coleman and Deen Freelon, this 512 page volume features contributions from Peter Dahlgren, Nick Couldry, Christian Fuchs, Fadi Hirzalla and Liesbet van Zoonen, among numerous others. The discussion was provoked, however, not by something one of its many contributors had written about digital politics, but by the book’s cost: $240 on Amazon in the US. (In the UK the hardback is £150.00 on Amazon. Handbook of Digital Politics is also available online direct from the publishers for £135.00, with the ebook available for £40.) As one of those on the list commented, ‘I’d love to buy it, but not at that price’ – to which another participant in the discussion responded: ‘I encourage everyone to use the preprint option to post their piece on, and perhaps others have other open access suggestions (e.g. Institutional Repositories of individual universities)’. Now, to be fair, the idea that is implied by this suggestion – that the platform for sharing research represents just another form of open access – is a common one. Yet posting on is far from being ethically and politically equivalent to using an institutional open access repository.

Read the whole post here.

Critical reflection on the ‘sharing economy'(?)

“Sharing is caring”, The Circle — Dave Eggers.

“the question of work time outside employment is posed with renewed vigour, having been totally ignored by the law reducing the working week to thirty-five hours, just as it ignored the exhaustion of the consumerist industrial model, a model within which production and consumption constitute a functional opposition, but one that has now become obsolete”, For a New Critique of Political Economy — Bernard Stiegler.

I have had a bunch of tabs open in my browser with the intention of writing something about the ‘sharing economy’ and how one might begin to ask questions of the kinds of words we use to variously describe reconfigurations of labour/work in relation to peer-to-peer, precarious work, casualisation and (perhaps) the slow dissolution of labour movements, but (as one can easily guess) I just don’t have the time to make something coherent about this… so here are some notes, off the top of my head, with some pointers to things that may be worth reading…

In the Grauniad (as it struggles to contend with a Labour party that is significantly to the left of it) there was a piece by Alex Hern arguing that the term the ‘sharing economy’ should “die”. His argument is that what the ‘sharing economy’ supposedly denotes is no kind of sharing but rather the continuation of unequal labour relations between those with wealth and those in need of work – an example is TaskRabbit: a system to hire temporary labour, such as caterers, cleaners or someone to stand in the queue for the latest iPhone (probably without needing to ensure competitive remuneration, because people who are willing to stand in a queue for you are probably in a precarious position).

I have some sympathy with the argument – it’s a reworking of the ‘precariat’ argument, best expressed by Guy Standing, in which the rejigging of the economy has created a new class of worker that is reliant upon ‘precarious’ (temporary, difficult and unpredictable) forms of work. However, reducing it to worrying over terminology seems to miss a broader point: regardless of what you call it, any attempt at novel economic activity will attract those who seek to be exploitative.

Before going on to think about how to address this state of affairs, it is worth noting that some have tried to think about how/why we are in such a situation. Izabella Kaminska (on the FT website) offers a good overview of some of the ways in which economists have thought about post-Fordism – the kinds of automation that are (and are not) happening in manufacturing, the movements of labour (offshoring etc.) and why we don’t have the amount of leisure time and levels of productivity promised by greater automation.

Indeed, many argue that, in an economy of apparently ever-increasing abundance, it is only by clawing back a more exploitative relation with the labour force that business/industry can continue to extract the levels of profit to which we have all become accustomed. We can look to the likes of the Italian post-Fordists, such as Berardi, Lazzarato, Terranova and Virno; to net critics such as Morozov; and to Bernard Stiegler for articulations of these latest forms of proletarianisation [you don’t have to agree, I’m just pointing out this is an argument that contextualises the ‘sharing economy’]. It is in this context of a decline in the amount of work – a decline of ‘careers’ and a growth of ‘jobs’ – that Melissa Gregg articulates the need to rethink our words for labour and to critically think about what might underlie the push for a ‘sharing’ economy.

It is in this context that one might formulate a critique of the ‘sharing economy’ – the talismans of this novel form of economic practice, the likes of Uber, TaskRabbit and Airbnb, all extract value from people either by rendering their traditional working capacities more ‘flexible’ (or precarious) or through those people seeking to monetise other parts of their lives, e.g. where they live, the stuff they own, or their ‘leisure time’.

The proponents of these kinds of work argue that this offers ‘flexibility’ – work when you want, how you want, etc. – but one might counter this with the argument that, as flexible labour, you have to be opportunistic and so you are precisely not working when you want but rather when there is a demand for your work.

Likewise, many of the kinds of ‘work’ that are offered through the ‘sharing economy’ platforms necessarily constitute unequal power relations. The two principal actors in the contract of ‘sharing’ work are not equal: the ‘sharing’ systems rely on creating competition, and thus a ‘scarcity’ of work such that the ‘customer’ has choice and the ‘worker’ doesn’t. The third actor, the platforms themselves, are also seeking to extract value out of the ‘sharing’ of labour by acting as the mediator, which means the system itself is always geared to the creation of a margin.

Seen as a precarious form of work, the ‘sharing economy’ has been labelled otherwise as the ‘gig economy’ and there’s been some interesting discussions about what such work means to us in terms of our mental and physical health. For example, these two pieces in the FT:

The silent anxiety of the sharing economy
New ‘gig’ economy spells end to lifetime careers

Both of which have lots of links to follow up.

The other aspect of an emerging critique of the ‘sharing economy’ is precisely the ‘platform’ nature of the kinds of systems that are seeking both to further and to profit from these apparently new forms of work. As Sebastian Olma suggests in a piece for the Institute of Network Cultures:

These are digital platforms that roughly do two things: either making the old practice of re- and multi-using durable goods more efficient or expanding market exchange into economically uncharted territory of society.

Olma argues (as do others) that what these platforms do is render available to the market things that have not been previously…

They stand for a digitally enabled expansion of the market economy, which…is the opposite of sharing.

This is what Sascha Lobo (amongst others – Gary Hall is good on this) has argued constitutes not a ‘sharing economy’ but a ‘platform capitalism’. Rather than marketplaces, platforms are a kind of generic connective infrastructure, what Olma calls an ‘ecosystem’, that connects customers and companies to anything, not just specific goods or services. He argues that

While it is absolutely true that internet marketplaces and digital platforms can reduce transaction costs, the claim that they cut out the middleman is pure fantasy.

Instead, the old ‘middlemen’ [sic] are replaced by more powerful gatekeepers: “monopolies with an unprecedented control over the markets they themselves create”, through the quasi-autonomous systems (what are popularly referred to as ‘algorithms’) that facilitate such things as Uber’s “surge pricing”. In this way every transaction becomes an auction, tipped in favour of the platform, and the worker is rendered always to some extent precarious.

Indeed, the reality of working in such systems is not only possibly very stressful, as argued in the FT piece linked a bit earlier, but also doesn’t even necessarily offer the positive outcomes that the proponents claim. As Sarah Kessler, a Fast Company journalist, noted in her extensive report of her attempt to become one of the ‘sharing economy’ workforce:

For one month, I became the “micro-entrepreneur” touted by companies like TaskRabbit, Postmates, and Airbnb. Instead of the labor revolution I had been promised, all I found was hard work, low pay, and a system that puts workers at a disadvantage.

This critique presents some interesting challenges to those who espouse alternative modes of working and performing economic activities, such as the P2PFoundation, and Stiegler’s push for an ‘economy of contribution‘ through Ars Industrialis. However, the ‘platform capitalism’ of Uber et al. is not the only way to run such a system.

Rather than resort to a gatekeeper model we might alternatively look to the (supposedly) radical transparency of the blockchain — in this way I’m left with some (probably quite muddled) questions:

  • What kind of economy/ economics is performed when the transactional infrastructure is decentralised?
    • Can you actually do without an intermediary (‘middleman’ [sic])?
    • Does a blockchain infrastructure facilitate enough of a commons to make a ‘no transaction cost’ economy possible?
  • Can we reduce ‘sharing’ to an issue of the negotiation of trust (not to be exploited), solvable by the blockchain?

There must be more/better questions but my brain is fried… I hope that this is at least useful for me to return to as a set of loose notes and perhaps even useful for others vaguely interested in such things. Likewise, as usual, if you’re better informed and want to pitch in – please do leave comments :)

The Human Face of Cryptoeconomies – @furtherfield exhibition

I just wanted to flag this excellent exhibition about to start at Furtherfield (in London). It involves the Museum of Contemporary Commodities alongside a lot of other great work and will very much be worth visiting.

The Human Face of Cryptoeconomies


Featuring Émilie Brout and Maxime Marion, Shu Lea Cheang, Sarah T Gold, Jennifer Lyn Morone, Rob Myers, The Museum of Contemporary Commodities (MoCC), the London School of Financial Arts and the Robin Hood Cooperative.

Furtherfield launches its Art Data Money programme with The Human Face of Cryptoeconomies at Furtherfield Gallery in the heart of London’s Finsbury Park.

The Human Face of Cryptoeconomies presents artworks that reveal how we might produce, exchange and value things differently in the age of the blockchain.

Appealing to our curiosity, emotion and irrationality, international artists seize emerging technologies, mass behaviours and p2p concepts to create artworks that reveal ideas for a radically transformed artistic, economic and social future.

Visit the Furtherfield website for more information.

Reblog> New paper: Locative media and data-driven computing experiments @syperng & @robkitchin

A really interesting paper by Sung-Yueh Perng, Rob Kitchin and Leighton Evans, definitely worth a read.

New paper: Locative media and data-driven computing experiments

Sung-Yueh Perng, Rob Kitchin and Leighton Evans have published a new paper entitled ‘Locative media and data-driven computing experiments‘ available as Programmable City Working Paper 16 on SSRN.


Over the past two decades urban social life has undergone a rapid and pervasive geocoding, becoming mediated, augmented and anticipated by location-sensitive technologies and services that generate and utilise big, personal, locative data. The production of these data has prompted the development of exploratory data-driven computing experiments that seek to find ways to extract value and insight from them. These projects often start from the data, rather than from a question or theory, and try to imagine and identify their potential utility. In this paper, we explore the desires and mechanics of data-driven computing experiments. We demonstrate how both locative media data and computing experiments are ‘staged’ to create new values and computing techniques, which in turn are used to try and derive possible futures that are ridden with unintended consequences. We argue that using computing experiments to imagine potential urban futures produces effects that often have little to do with creating new urban practices. Instead, these experiments promote big data science and the prospect that data produced for one purpose can be recast for another, and act as alternative mechanisms of envisioning urban futures.

Keywords: Data analytics, computing experiments, locative media, location-based social network (LBSN), staging, urban future, critical data studies

The paper is available for download here.

Character assassinations? There’s an app for that…

Interesting article on the Washington Post site, tweeted by David Murakami Wood (above), that talks about a service/app called “Peeple” that seeks to be a “Yelp for people” – to enable us to ‘rate’ and ‘review’ one another… A market-driven death-knell for treating one another like ‘people’ (rendering the name rather ironic) and another attempt to pull a further aspect of ‘ordinary’ life into the attention economy. This is an attempted renegotiation of the ‘normative’: what Daniel Miller and Sophie Woodward, in their book Blue Jeans, describe as “the expectation that actions within a social field are likely to be judged as right or wrong, appropriate or inappropriate, proper or transgressive”. What if reviewing one another became ‘normal’..?(!!) *sigh*

Reblog> Here’s why visions of ubiquitous connectivity aren’t going to be realised any time soon

Interesting post from Mark Graham on his blog…

Here’s why visions of ubiquitous connectivity aren’t going to be realised any time soon

The last few months have seen a wealth of stories about visions to connect the world. Facebook, Google, large international organisations, states, and even Bono, dream of a world in the near future in which we are all hooked into the network.

In the midst of all of this, I found a comment made by Jimmy Wales particularly interesting.

This hope of the inevitability of ubiquitous connectivity is one that is widely reproduced by other policy makers, technologists, and thought leaders.

However, it is a hope that needs to be unpacked. There are two ways in which this hypothetical future in which everyone is connected could be brought into being.
The first one of these futures is a world where everyone can afford access. But as we demonstrated in our research about the global affordability of broadband, dropping prices is unlikely to be a sufficient strategy. There will remain billions of people making a subsistence living, for whom even extremely cheap access is unaffordable. The average Mozambican worker, for instance, would need over one and a half years’ salary to pay for one year’s worth of broadband access.

In other words, Jimmy Wales’ prediction won’t happen simply by lowering prices.
The second future is the one promoted by the likes of Mark Zuckerberg. One in which large corporations sponsor free (and importantly, limited) access for billions of people in return for attention monopolies. This brings a very different sort of Internet into being: one in which winners and losers, centres and peripheries, are already pre-selected by those who control your access.

What this all ultimately means is that it seems unlikely that any sort of open Web will be ubiquitously available in the near future. Simply lowering the cost of access will continue to leave out the very poorest; and handing over the project of connecting the disconnected to large technology firms will leave us with a very different (and far less desirable) sort of Internet.

To read a bit more about visions of connectivity, here are two of my recent papers:

CFP> Mapping (from) the minor of big data?

Here’s an interesting call for papers for the Association of American Geographers conference next year (2016), posted to Crit-Geog by Wen Lin…

Call for Papers:

Mapping (from) the minor of big data?
AAG Annual Meeting, San Francisco
29 March to 2 April 2016

Wen Lin, Newcastle University
Matthew W. Wilson, University of Kentucky

“The minor is not a theory of the margins, but a different way of working with material. … It is about the conscious use of displacement.” (Katz 1996, 489)

“Clearly, the technology has the potential to disenfranchise the weak and not so powerful through the selective participation of groups and individuals.” (Harris and Weiner 1998, 69)

Recent years have seen the explosive growth of geospatial data produced and shared by vast, diverse users, facilitated by an array of information and communication technologies and mobile devices. There is a burgeoning body of work attempting to theorize and investigate these processes and practices, with notions including neogeography (Turner 2006), volunteered geographic information (VGI) (Goodchild 2007), maps 2.0 (Crampton 2009), vernacular mapping (Gerlach 2014), alt.gis (Schuurman 2015), and a form of geographic big data (Mooney 2015). Significant efforts have been made to examine a range of issues derived from such phenomena regarding ways of mapping, data quality, and associated socio-political implications concerning power, equity and knowledge. Attention has been given to the empowering and emancipatory potential of new ways of mapping and storytelling, while questions have also been raised about possible implications of surveillance, population control, and unevenness of knowledge production.

Yet, there remains much to be known about those mapping efforts that are seemingly on and in the margins of ‘big data’, from those less active contributors, or by actors in relatively marginalized positions. Such efforts may constitute counter-hegemonic knowledge production (Harris and Weiner 1998, see also Elwood 2015) or may be alternatively understood as a kind of minor data (to draw upon ‘minor theory’ in Katz 1996).

This session intends to contribute to these vibrant discussions by engaging with documenting efforts of mappings and data construction that might be of a much smaller quantity in the wake of big data. We welcome papers addressing theoretical, methodological, and empirical investigations of these mapping efforts situated in a variety of contexts. Questions may include, but are not limited to:

  • How might we engage with mapping the minor in the context of big data?
  • In what ways are data generated, represented, or curated by those who might be from a more marginalized position in these mapping efforts?
  • In what ways are geospatial technologies used, reconfigured, or contested in these mapping efforts?
  • In what ways is knowledge (re)produced in these mappings?
  • What are the challenges of tracing and documenting mapping efforts from the margins?
  • What might be the broader implications of these accounts?

If you are interested in participating in this session, please send an abstract of no more than 250 words to Wen Lin (wen.lin[at]) and Matthew W. Wilson (matthew.w.wilson[at]) by Friday, 2 October. Please note that we are attempting to bring alternative perspectives and positions to this discussion, in alignment with a recent manifesto on the gender and racial composition of AAG panels.


Crampton, J. 2009. Cartography: performative, participatory, political. Progress in Human Geography, 33(6): 840-848.

Elwood, S. 2015. Still Deconstructing the Map: Microfinance Mapping and the Visual Politics of Intimate Abstraction. Cartographica, 50(1): 45-49.

Gerlach, J. 2014. Lines, contours, legends: coordinates for vernacular mapping. Progress in Human Geography, 38(1): 22-39.

Goodchild, M. 2007. Citizens as sensors: the world of volunteered geography. GeoJournal, 69: 211-221.

Harris T., Weiner D. 1998. Empowerment, marginalization, and “community-integrated” GIS. Cartography and Geographic Information Systems, 25(2): 67-76.

Katz, C. 1996. Towards minor theory. Environment & Planning D: Society & Space, 14: 487-499.

Mooney, P. 2015. An Outlook for OpenStreetMap. In J. Jokar Arsanjani, A. Zipf, P. Mooney, M. Helbich (eds.) OpenStreetMap in GIScience: Experiences, Research, and Applications. Cham, Springer, pp. 319-324.

Schuurman, N. 2015. What is alt.gis? Introduction to the Special Issue. The Canadian Geographer, 59(1): 1-2.

Turner, A. 2006. An Introduction to Neogeography. O’Reilly Media, Sebastopol, CA.

Coded language online

By which I mean code as cipher or ‘secret’ (sort of) rather than code as in mark-up…

The UK Child Exploitation and Online Protection centre and ParentZone maintain a website for parents, Parent Info, which has recently published a guide to “online teen speak”, with the rationale that parents should know what their children are up to.

There are some entertaining examples and it just goes to confirm I am now ‘old’ because I simply don’t recognise some of this stuff…

Now, in a way I have some sympathy with this rationale insofar as anyone (not just people of a given age bracket) can be naive in their behaviour through mediated communications. Whereas the more established risks of rumour and defamation might feel more distant, the risks of the permanence of anything one posts online, and its potential to spread unimaginably quickly, are much more immediate and can be forcefully, sometimes tragically, felt.

Nevertheless, depending on your political persuasion, one might feel that there ought to be some limits to the oligopticon; that perhaps we ought not to be spying on one another all the time. It is thus striking that the rationale for ‘safety’ (in this case the entirely justified concern for the safety of our children) matches very closely to the sort of libertarian ‘nothing to hide’, radical transparency ideology of both the advertising moguls/data sharks at the helm of corporations such as Facebook and the supporters of technologies such as the blockchain. It is interesting (to me anyway) that the bounds of the conversation around privacy have moved so much in only 10 years… When the Labour government talked about introducing a national ID card around 2005ish, campaigns such as No2ID garnered a lot of support based on an appeal to arguments about personal privacy that are already (perhaps) beginning to seem slightly irrelevant, if one has maintained a use of Facebook, Google, Dropbox, PayPal, loyalty cards etc.

There is, of course, a significant difference between the perceived threats to ‘privacy’ by corporations hoovering up our data and the risk to personal safety from those with malign intentions – and I certainly don’t want to overly confuse the two… but the same rationale for a national identity card – to catch those with undesirable intentions – sits behind the perceived need to surveil our children and, indeed, was the rationale for ContactPoint here in the UK. This takes some very careful and nuanced unpacking (although it has attracted press commentary) that I don’t think I’m best-placed to do here, but I wanted to record a reflection on this anyway.

The other thing that occurred to me while reading the press coverage of the guide to ‘teen speak’ was that this really isn’t new. As was made very evident by an episode of Fry’s English Delight on BBC Radio 4 the other week – there is a very, very long history of secret languages, and these are often used by groups that feel in a minority, or are a clique… from back slang, or ‘pig latin’ to medical slang (says the person most likely to be TTFO). But the power of these dialects is that they are secret – so what will a handy guide to them being published online do… we shall see… I’ll wait to hear from friends and colleagues with children that fall within the age group…

As an aside, ParentZone are convening an interesting looking conference in October on the Digital Family… worth a look if you’re that way inclined.