Internet of Things, ownership and Ts & Cs

Toothpaste terms of service

I decided to make a spoof image, following others’ attempts to satirically reflect on the kinds of business models that seem to be creeping in for ‘Internet of Things’ products and services. My impetus is that I’ve enjoyed some of the recent posts on the @internetofshit Twitter stream, which lampoons IoT business ideas. These got me thinking…

Many of the successful posts take to the extreme a model we are already experiencing – that we do not necessarily fully control the things we think we own. I am aware that other folk will probably have commented in more depth and with greater nuance, but there we are… this is just a blogpost! (I welcome suggestions for further reading, though.)

For example, I recently bought a Kindle Paperwhite and, to remove the inbuilt advertising, had to pay a £10 fee (on top of the retail price) to ‘unsubscribe’ from ‘Special Offers’. In other words, I had bought the device, but to remove the adverts I had to pay more.

This, of course, resonates with the inkjet printer business model – in which the manufacturer can almost give away some models because the ink itself is highly lucrative, which has led to stories comparing its value to that of gold…

In my most recent lecture for my third-year option module (Geographies of Technology) I addressed some of these issues and invited the students to consider the following questions when thinking about an ‘internet of things and places’:

Questions of ownership/responsibility:

  • Whose things?
  • Whose data?
  • Who has access? How? When? Where?

Questions of power:

  • How are decisions made on the basis of the data?
  • How do these decisions influence our lives?

Questions of value:

  • How can/should we negotiate the value(s) of our data?
  • What are we willing to give (up) for perceived benefits?
    • When does giving away lots of data become not worth it?

Later the same day, on the train home, I idly tweeted a speculative satirical scenario…

That tweet led me to create the still image above. I think there’s a lot of scope for using speculative design techniques in a satirical way to provoke more debate about the kinds of relationships we want to enter into with and through the technologies we bring into our everyday lives. My key inspiration here is Anne Galloway’s work, especially the beautiful Counting Sheep project.

Reblog> New paper: Locative media and data-driven computing experiments @syperng & @robkitchin

A really interesting paper by Sung-Yueh Perng, Rob Kitchin and Leighton Evans, definitely worth a read.

New paper: Locative media and data-driven computing experiments

Sung-Yueh Perng, Rob Kitchin and Leighton Evans have published a new paper entitled ‘Locative media and data-driven computing experiments’, available as Programmable City Working Paper 16 on SSRN.

Abstract

Over the past two decades urban social life has undergone a rapid and pervasive geocoding, becoming mediated, augmented and anticipated by location-sensitive technologies and services that generate and utilise big, personal, locative data. The production of these data has prompted the development of exploratory data-driven computing experiments that seek to find ways to extract value and insight from them. These projects often start from the data, rather than from a question or theory, and try to imagine and identify their potential utility. In this paper, we explore the desires and mechanics of data-driven computing experiments. We demonstrate how both locative media data and computing experiments are ‘staged’ to create new values and computing techniques, which in turn are used to try and derive possible futures that are ridden with unintended consequences. We argue that using computing experiments to imagine potential urban futures produces effects that often have little to do with creating new urban practices. Instead, these experiments promote big data science and the prospect that data produced for one purpose can be recast for another, and act as alternative mechanisms of envisioning urban futures.

Keywords: Data analytics, computing experiments, locative media, location-based social network (LBSN), staging, urban future, critical data studies

The paper is available for download here.

Reblog> New paper: Anticipatory Logics of the Global Smart City Imaginary by @jimmerricks

Over on the Programmable City website there’s news of a new paper by Jim Merricks White on the anticipatory logics of smart cities… I have previous work in this area, so it’ll be an interesting read!

New paper: Anticipatory Logics of the Global Smart City Imaginary

Jim White’s paper, ‘Anticipatory Logics of the Global Smart City Imaginary’, is available for download on the Social Science Research Network as Programmable City Working Paper 8.

Abstract

The smart city encompasses a broad range of technological innovations which might be applied to any city for a broad range of reasons. In this article, I make a distinction between local efforts to effect the urban landscape, and a global smart city imaginary which those efforts draw upon and help sustain. While attention has been given to the malleability of the smart city concept at this global scale, there remains little effort to interrogate the way that the future is used to sanction specific solutions. Through a critical engagement with smart city marketing materials, industry documents and consultancy reports, I explore how the future is recruited, rearranged and represented as a rationalisation for technological intervention in the present. This is done across three recurring crises: massive demographic shifts and subsequent resource pressure; global climate change; and the conflicting demands of fiscal austerity and the desire of many cities to attract foreign direct investment and highly-skilled workers. In revealing how crises are pre-empted, precautioned and prepared for, I argue that the smart city imaginary normalises a style and scale of response deemed appropriate under liberal capitalism.

Keywords: smart cities, the urban age, anticipation, risk

[pdf download]

For these companies you, walking down the street, are a data point

I’ve been collecting the promotional videos of various companies that surveil ‘public’ spaces to garner information that, using the logics applied to web analytics, they see as valuable commercial intelligence.

The rationale common throughout is that the aggregate crowd on the street, in a shopping centre, or in any form of apparently ‘public’ space is fair game for surveillance, measurement and, in turn, address (albeit by their commercial partners – retailers etc.). For these companies this is merely a technical, or perhaps social, problem – not a political one. It’s a technical problem for them because, as the chap from Placemeter says, “it’s all data” and it’s ‘waiting’ to be harvested. It’s a social problem for them only insofar as it’s about improving ‘services’ for us as consumers (not as citizens, as families or as any other part of our skein of identity). It is NOT a political problem for them – it’s not an issue of who has the right to the city, who has a right to privacy, or what might constitute reasonable expectations of any of those things. It certainly is never couched in terms of there needing to be governance of these activities – at least not in these kinds of videos.

So, the videos are interesting artefacts of the formulation of what pervasive media/ubiquitous computing and smart cities look like and how they are performed…

There is a reasonable article about Placemeter in the Guardian Cities section.

Reblog> New paper: Data-driven, networked urbanism

This looks good –– added to my ‘to read’ pile 🙂

New paper: Data-driven, networked urbanism

A new paper, ‘Data-driven, networked urbanism’, has been published by Rob Kitchin as Programmable City Working Paper 14. The paper has been prepared for the Data and the City workshop to be held at Maynooth University, Aug 31st–Sept 1st.

Abstract
For as long as data have been generated about cities various kinds of data-informed urbanism have been occurring.  In this paper, I argue that a new era is presently unfolding wherein data-informed urbanism is increasingly being complemented and replaced by data-driven, networked urbanism.  Cities are becoming ever more instrumented and networked, their systems interlinked and integrated, and vast troves of big urban data are being generated and used to manage and control urban life in real-time. Data-driven, networked urbanism, I contend, is the key mode of production for what have widely been termed smart cities.  In this paper I provide a critical overview of data-driven, networked urbanism and smart cities focusing in particular on the relationship between data and the city (rather than network infrastructure or computational or urban issues), and critically examine a number of urban data issues including: the politics of urban data; data ownership, data control, data coverage and access; data security and data integrity; data protection and privacy, dataveillance, and data uses such as social sorting and anticipatory governance; and technical data issues such as data quality, veracity of data models and data analytics, and data integration and interoperability.  I conclude that whilst data-driven, networked urbanism purports to produce a commonsensical, pragmatic, neutral, apolitical, evidence-based form of responsive urban governance, it is nonetheless selective, crafted, flawed, normative and politically-inflected.  Consequently, whilst data-driven, networked urbanism provides a set of solutions for urban problems, it does so within limitations and in the service of particular interests.

Key words: big data, data analytics, governance, smart cities, urban data, urban informatics, urban science

Download PDF

What might happen when ‘things’ design themselves?

I’ve been meaning to flag this for a while… Chris Speed at Edinburgh (who gave the ‘dancing with data’ talk I posted a while ago) is leading a project called ThingTank:

The ThingTank project identifies that ‘things’ may soon know more about our lives than we do and may also be able to make suggestions about what is missing. The purpose of this project is to explore the potential for identifying novel patterns of use within the data that is streamed through the interaction between people and things, and things and things. Our project builds on research and innovation that has been established by the three investigators across the fields of Internet of Things, Social Experience Design and Machine Learning. Through a better understanding of what data can tell us about how we use objects, new models of use will emerge and reinvigorate the role of things and people within design and manufacturing.

The project offers some interesting provocations both as research outcomes and as innovation in methods. It’s worth looking at this write-up by Chris on his website Fields.

Greenfield on the politics of Uber as ‘socially corrosive mobility’

On his occasional blog Speedbird, Adam Greenfield has written an entertaining and incisive blogpost about the ‘mobility brokers’ Uber – the software-sorted unlicensed alt-taxi providers.

The post is worth a read for the trenchant dissection of how Uber is a kind of sigil of some of the questionable politics arising from the so-called ‘smart city’. For example:

– Interpersonal exchanges are more appropriately mediated by algorithms than by one’s own competence.

This conception of good experience is not the only thing suggesting that Uber, its ridership or both are somewhat afraid of actual, unfiltered urbanity. Among the most vexing challenges residents and other users of any large urban place ever confront is that of trust: absent familiarity, or the prospect of developing it over a pattern of repeated interactions, how are people placed (however temporarily) in a position of vulnerability expected to determine who is reliable?

Like other contemporary services, Uber outsources judgments of this type to a trust mechanic: at the conclusion of every trip, passengers are asked to explicitly rate their driver. These ratings are averaged into a score that is made visible to users in the application interface: “John (4.9 stars) will pick you up in 2 minutes.” The implicit belief is that reputation can be quantified and distilled to a single salient metric, and that this metric can be acted upon objectively.

Drivers are, essentially, graded on a curve: their rolling tally, aggregated over the previous 500 passenger engagements, must remain above average not in absolute terms, but against the competitive set. Drivers whose scores drop beneath this threshold may not receive ride requests, and it therefore functions as an effective disciplinary mechanism. Judging from conversations among drivers, further, the criteria on which this all-important performance metric is assessed are subjective and highly variable, meaning that the driver has no choice but to model what they believe riders are looking for in the proverbial “good driver,” internalize that model and adjust their behavior accordingly.

What riders are not told by Uber – though, in this age of ubiquitous peer-to-peer media, it is becoming evident to many that this has in fact been the case for some time – is that they too are rated by drivers, on a similar five-point scale. This rating, too, is not without consequence. Drivers have a certain degree of discretion in choosing to accept or deny ride requests, and to judge from publicly-accessible online conversations, many simply refuse to pick up riders with scores below a certain threshold, typically in the high 3’s.

This is strongly reminiscent of the process that I have elsewhere called “differential permissioning,” in which physical access to everyday spaces and functions becomes ever-more widely apportioned on the basis of such computational scores, by direct analogy with the access control paradigm prevalent in the information security community. Such determinations are opaque to those affected, while those denied access are offered few or no effective means of recourse. For prospective Uber patrons, differential permissioning means that they can be blackballed, and never know why.

Uber certainly has this feature in common with algorithmic reputation-scoring services like Klout. But all such measures stumble in their bizarre insistence that trust can be distilled to a unitary value. This belies the common-sense understanding that reputation is a contingent and relational thing – that actions a given audience may regard as markers of reliability are unlikely to read that way to all potential audiences. More broadly, it also means that Uber constructs the development of trust between driver and passenger as a circumstance in which algorithmic determinations should supplant rather than rely upon (let alone strengthen) our existing competences for situational awareness, negotiation and the detection of verbal and nonverbal social cues.

Interestingly, despite its deployment of mechanisms intended to assess driver and passenger reliability, the company goes to unusual lengths to prevent itself from being brought to accountability. Following the December 2014 Delhi rape incident, police investigators were stunned to realize that while Uber had been operating in India for some time, neither the .in website nor any other document they had access to listed a local office. They were forced to register for the app themselves (as well as download a third-party payment application) simply so they could hire an Uber car and have the driver bring them to the place where he believed his employers worked. Here we see William Gibson’s science-fictional characterization of 21st-century enterprise (“small, fast, ruthless. An atavism… all edge”) brought to pungent life.

Read the whole article.
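As an aside, the rating mechanic Greenfield describes – a rolling average over the most recent 500 rides, checked against an opaque cut-off that determines whether a driver keeps receiving requests – can be made concrete with a small sketch. The snippet below is purely illustrative: the 500-ride window comes from the quoted text, while the class, the function names and the 4.6 cut-off are hypothetical stand-ins of my own, not Uber’s actual implementation.

```python
from collections import deque

# A minimal sketch of the rolling-average reputation mechanic described above.
# The 500-ride window comes from the quoted text; the threshold value and all
# names here are hypothetical illustrations, not Uber's actual implementation.

WINDOW = 500   # most recent rides that count towards the rolling score
CUTOFF = 4.6   # hypothetical minimum average; the real cut-off is opaque


class DriverScore:
    def __init__(self):
        # Only the last WINDOW ratings are retained; older ones fall away.
        self.ratings = deque(maxlen=WINDOW)

    def add_rating(self, stars):
        """Record a passenger's 1-5 star rating after a trip."""
        self.ratings.append(stars)

    @property
    def score(self):
        """Rolling average over the retained ratings (0 if none yet)."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def may_receive_requests(self):
        """'Differential permissioning': below the cut-off, no ride requests."""
        return self.score >= CUTOFF


driver = DriverScore()
for stars in [5, 5, 4, 5, 3, 5]:
    driver.add_rating(stars)
print(f"score={driver.score:.2f}", driver.may_receive_requests())
```

The point of the sketch is simply that a single number, computed over a moving window and compared against a threshold the rated party cannot see, does all the disciplinary work Greenfield describes.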

Paper accepted – Memory programmes: the retention of collective life

I am pleased to share that I have recently had a paper accepted for Cultural Geographies, which will form part of a theme section/issue co-edited by Sarah Elwood and Katharyne Mitchell concerning “Technology, memory and collective knowing”, stemming from a session at the 2013 AAG in Los Angeles.

The paper is entitled ‘Memory programmes: the retention of collective life’ and builds upon a theoretical conference paper I gave at the Conditions of Mediation conference in 2013.

The aim of the article is to interrogate some key elements of how software has become a means of ‘industrialising’ memory, following Bernard Stiegler. This industrialisation of memory involves conserving and transmitting extraordinary amounts of data – data that are both volunteered and captured in everyday life, and operationalised in large-scale systems. Such systems constitute novel sociotechnical collectives which have begun to condition how we perform our lives such that they can be recorded and retained.

To investigate the programmatic nature of our mediatised collective memory the article has three parts. The first substantive section looks at a number of technologies as means of capturing, operating upon and retaining our everyday activities in ‘industrial’ scale systems of memory. Particular attention is paid to the quasi-autonomous agency of these systems, which appear to operate at a scale and speed that exceed any human capacity for oversight.

In the second section I look at the mnemonic capabilities of networked technologies of digital mediation as ‘mnemotechnologies’. Following Stiegler, these are technologies and technical supports that both support and reterritorialise what we collectively understand about our everyday lives.

The conclusion of the article addresses the ways in which an ‘industrialisation of memory’ both challenges and transforms the ways in which we negotiate collective life.

I have copied the abstract below, and I’d be happy to share pre-publication copies – please contact me via email.

Abstract:

This article argues that, in software, we have created quasi-autonomous systems of memory that influence how we think about and experience life as such. The role of mediated memory in collective life is addressed as a geographical concern through the lens of ‘programmes’. Programming can mean ordering, and thus making discrete; and scheduling, making actions routine. This article addresses how programming mediates the experience of memory via networked technologies. Materially recording knowledge, even as electronic data, renders thought mentally and spatially discrete and demands systems to order it. Recorded knowledge also enables the ordering of temporal experience both as forms of history, thus the sharing of culture, and as the means of planning for futures. We increasingly retain information about ourselves and others using digital media. We volunteer further information recorded by electronic service providers, search engines and social media. Many aspects of our collective lives are now gathered in cities (via CCTV, cellphone networks and so on) and retained in databases, constituting a growing system of memory of parts of life otherwise forgotten or unthought. Using examples, this article argues that, in software, we have created industrialised systems of memory that influence how we think about living together.

Keywords: memory, technology, mnemotechnics, industrialisation, programming, Stiegler

Reblog > Anne Galloway at Mobilities & Design Workshop, Lancaster

The Mobilities and Design workshop (later this month) looks interesting, not least because Anne will be joining from afar to talk about her excellent Counting Sheep project. As she says on her blog:

I’m really pleased to be participating (via video & Skype) in the Mobilities and Design Workshop at Lancaster University, 29-30 April, 2014.

The event is being live-streamed so you’ll be able to follow along, and this is what I’ll be talking about:

Why Count Sheep, and Other Tricky Questions About Speculative Design Ethnography

Governments around the world require livestock farmers to tag their animals and track their movements from birth to death. Mandated for the purposes of local biosecurity and global market access, electronic identification is also used to keep track of breeding information and health treatments. Combined with location technologies like GPS, and sensor technologies that can monitor individual animal health and external environmental conditions, livestock are now capable of generating and transmitting enormous amounts of data.

At the same time, farmers in the developed world respond to increased public concerns about animal welfare and environmental sustainability by developing new online forms of agricultural advocacy, or what they call “agvocacy”. The US-based AgChat Foundation, and its equivalents in the UK, Australia and New Zealand, use social media to promote greater public awareness of agricultural practices and connect producers and consumers through weekly online chats. A “farm to fork” traceability ethos underpins agvocacy efforts, and aligns well with technosocial imperatives related to the “Internet of Things” – or the ability to connect data-rich objects (including animals) to the Internet.

For the past three years I’ve spent a lot of time thinking about sheep, talking about sheep, and hanging out with sheep or other people who care about sheep. I’ve done this because I’m interested in what the emergent technologies and politics I describe above might mean for our longest domesticated livestock animal, and for the people who continue to produce and consume them. In most ways, this has been standard STS-based ethnographic research: participant observation, interviews, etc. But the systems that I describe aren’t fully formed–and may not ever fully form as imagined–so I needed to come up with complementary research methods that could help me apprehend the future, or more correctly possible futures, and for that I turned to design.

This presentation will first outline the speculative design ethnography (SDE) methods developed, and outputs created, for the “Counting Sheep: NZ Merino in an Internet of Things” research project. (I encourage people to check out the design scenarios for themselves.) Then I will reflect on the challenges and opportunities of this kind of hybrid research practice, paying particular attention to how future visions act in the present to construct multiple publics and co-produce knowledge. Finally, using preliminary responses to our work, I will consider the potential of SDE as a public engagement strategy, and the role of disinterested or disagreeable publics.

Related reading

Galloway, A. 2013. “Emergent Media Technologies, Speculation, Expectation and Human/Nonhuman Relations.” Journal of Broadcasting and Electronic Media 57(1): 53-65.

Galloway, A. 2013. “Towards Fantastic Ethnography and Speculative Design.” Ethnography Matters, 17 September, 2013.

Reblog > Nigel Thrift and Steven Koonin discuss urban science and big data

Stuart Elden points to an interesting video of a conversation with Nigel Thrift, discussing urban informatics, ‘big data’ and so on. Slight hint of Thrift buying into the rhetoric around ‘big data’ but still an interesting discussion…

Nigel Thrift, Vice-Chancellor of the University of Warwick, and Steven Koonin, Director of New York University’s Center for Urban Science and Progress, partners in this endeavour, discussed the emerging field of applied urban science and informatics, the opportunities it presents, and how it is challenging the way we think about information. The discussion was moderated by Sallie Keller, Director of the Social Decision and Analytics Laboratory, Virginia Bioinformatics Institute at Virginia Tech.