Category Archives: data

Reblog> Social Justice in an Age of Datafication: Launch of the Data Justice Lab

Via The Data Justice Lab.

Social Justice in an Age of Datafication: Launch of the Data Justice Lab

The Data Justice Lab will be officially launched on Friday, 17 March 2017. Join us for the launch event at Cardiff University’s School of Journalism, Media and Cultural Studies (JOMEC) at 4pm. Three international speakers will discuss the challenges of data justice.

The event is free but requires pre-booking at

Data Justice Lab — Launch Event — Friday 17 March 4pm — Cardiff University

Our financial transactions, communications, movements, relationships, and interactions with government and corporations all increasingly generate data that are used to profile and sort groups and individuals. These processes can affect both individuals and entire communities, which may be denied services and access to opportunities, or wrongfully targeted and exploited. In short, they impact on our ability to participate in society. The emergence of this data paradigm therefore introduces a particular set of power dynamics requiring investigation and critique.

The Data Justice Lab is a new space for research and collaboration at Cardiff University that has been established to examine the relationship between datafication and social justice. With this launch event, we ask: What does social justice mean in an age of datafication? How are data-driven processes impacting on certain communities? In what way does big data change our understanding of governance and politics? And what can we do about it?

We invite you to come and participate in this important discussion. We will be joined by the following keynote speakers:

Virginia Eubanks (New America), Malavika Jayaram (Digital Asia Hub), and Steven Renderos (Center for Media Justice).

Virginia Eubanks is the author of Digital Dead End: Fighting for Social Justice in the Information Age (MIT Press, 2011) and co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith (SUNY Press, 2014). She is also the cofounder of Our Knowledge, Our Power (OKOP), a grassroots economic justice and welfare rights organization. Professor Eubanks is currently working on her third book, Digital Poorhouse, for St. Martin’s Press. In it, she examines how new data-driven systems regulate and discipline the poor in the United States. She is a Fellow at New America, a Washington, D.C. think tank and the recipient of a three-year research grant from the Digital Trust Foundation (with Seeta Peña Gangadharan and Joseph Turow) to explore the meaning of digital privacy and data justice in marginalized communities.

Malavika Jayaram is the Executive Director of the Digital Asia Hub in Hong Kong. Previously she was a Fellow at the Berkman Klein Center for Internet & Society at Harvard University, where she focused on privacy, identity, biometrics and data ethics. She worked at law firms in India and the UK, and she was voted one of India’s leading lawyers. She is Adjunct Faculty at Northwestern University and a Fellow with the Centre for Internet & Society, India, and she is on the Advisory Board of the Electronic Privacy Information Center (EPIC).

Steven Renderos is Organizing Director at the Center for Media Justice. With over 10 years of organizing experience, Steven has been involved in campaigns to lower the cost of prison phone calls, preserve the Open Internet, and expand community-owned radio stations. Steven previously served as Project Coordinator of the Minnesotano Media Empowerment Project, an initiative focused on improving the quality and quantity of media coverage and representation of Latinos in Minnesota. He currently serves on the boards of Organizing Apprenticeship Project and La Asamblea de Derechos Civiles. Steven (aka DJ Ren) also hosts a show called Radio Pocho at a community radio station and spins at venues in NYC.

The event will be followed by a reception.

The internet is mostly bots(?)

When I am king, you will be first against the wall…

In an article for The Atlantic, Adrienne LaFrance observes that a report by the security firm Imperva suggests that 51.8% of traffic online is bot traffic (by which they mean 51.8% of a sample of traffic [“16.7 billion bot and human visits collected from August 9, 2016 to November 6, 2016”] sent through their global content delivery network, “Incapsula”):

Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online. Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.

LaFrance goes on to cite the marketing director of Imperva (who wants to sell you ‘security’ – he’s in the business of selling data centre services) to observe that:

“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”

How do we judge this report? I find it difficult to judge how representative this company’s data are, although they are the purveyor of a ‘global content delivery network’. The numbers seem believable, given how long we’ve been hearing that the majority of traffic is ‘not human’ (e.g. a 2013 article in The Atlantic making a similar point and a 2012 ZDNet article saying the same thing: most web traffic is ‘not human’ and mostly malicious).

The ‘not human’ thing needs to be questioned a bit — yes, it’s not literally the result of a physical action but, then, how much of the activity on the electric grid can be said to be ‘not human’ too? I’d hazard that the majority of that so-called ‘not human’ traffic is under some kind of regular oversight and monitoring – it is, more or less, the expression of deliberative (human) agency. Indeed, to reduce the ‘human’ to what our simian digits can make happen seems ridiculous to me… We need a more expansive understanding of technical (as in technics) agency. We need more nuanced ways to come to terms with the scale and complexity of the ways we, as a species, produce and perform our experiences of everyday life – of what counts as work and the things we take for granted.

Investigating Google’s revolving door with governments – Tactical Tech Collective

Some really interesting work from the Tactical Tech team looking at the ways in which different people and their skills and knowledges move in and out of government and the ‘Alphabet empire’. Worth a full read, but here’s a snippet to whet the appetite…

The Alphabet Empire by Tactical Tech and La Loma as shown in The Glass Room in New York. Based on openly available information, this 3-D infographic combines a quote from its chairman, Eric Schmidt, with a mapping of its acquisitions and investments.
By Google’s own admission, the company, like many others, cultivates close relationships with governmental bodies and public officials. Google disclosed that in 2015 it spent over €4 million on lobbying the European Union – considerably more than the €1 million it spent just three years previously, in 2012.

But some of Google’s relationships with public bodies and officials come with a smaller price tag: over the past ten years, at least 80 people have been identified as having moved jobs between Google and European governments.

It’s this “revolving door” that formed the basis of our investigation. We started out with a number of questions: who were these people who had moved from Google to government or vice versa? Where exactly did they move from and to, and when? And most importantly how many of these questions could we find answers to using open, publicly-available information?

Here’s what we learned, and how we did it.

CFP> Creative propositions and provocations on the heritages of data-trade-place-value

Paula Crutchlow, Ian Cook and I invite submissions for the following session for this year’s RGS-IBG conference. Please do share this with anyone (they don’t have to be geographers) who may be interested. As we say below, we welcome any kind of creative response to the theme. The session builds on Paula’s PhD project The Museum of Contemporary Commodities, which will be active before and throughout the conference in the RGS-IBG building.

Museum of Contemporary Commodities: creative propositions and provocations on the heritages of data-trade-place-value

How do we open out the messy digital geographies of trade, place and value to the world? How can we work with the digital beyond archives, spectacle and techno-dystopian imaginations? How do we do so in ways that are performative, collaborative and provocative of the digital?

This session builds on the planned hosting of the Museum of Contemporary Commodities (MoCC) in the RGS-IBG’s Pavilion in the days leading up to the annual conference (and its partial installation in the RGS-IBG building during the conference), where it will join the V&A, Science and Natural History Museums on London’s Exhibition Road. Developed as acts of valuing the things we buy today as the heritage of tomorrow, MoCC’s artworks take the form of dynamic, collaborative hacks and prototypes; socio-material processes, objects and events that aim to enrol publics in trade justice debates in light footed, life-affirming, surprising and contagious ways as part of their daily routines.

We invite prospective participants to offer propositions and provocations that stitch into or unpick the complex and sometimes knotty patchwork quilt of data-trade-place-value. This is an invitation to contribute to and convene conversations that enliven geographical understandings of the governance, performance, placings and values/valuing of contemporary (digitally) mediated material culture. The resulting session is not conceived as a ‘conventional’ paper session. We invite submissions of ten-minute contributions that might take various forms, including essay, performance, video and many other creative responses to the theme.

This invitation should be understood in its broadest sense. We are interested in the commingling and mash-up of the theme(s) data-trade-place-value. We very much encourage submissions that push back against the normative authorities or discourses surrounding ‘the digital’ (however that might be conceived). So, we hope that all involved in the session will thereby be challenged and inspired by creative propositions and provocations that begin to get to the heart of how we open out the messy digital geographies of trade, place and value to the world.
Themes could include:

  • lively methods that work with and through participatory media
  • intimacy, humour, trust and the internet of things
  • mashups, subversions and hacks of big data from the bottom up
  • discourses and practices of future orientation and the spatial imaginations of ‘the digital’
  • an intersectional internet and the rise of ‘platforms’
  • alternative trade models, value systems and networked culture
  • DIWO (Do It With Others), scholar-activism & public pedagogy
  • the economic geographies of the battle for ‘open’

Please submit 250-word abstracts to us by email by 7 February and we will get back to you by 13 February.

Spacing social media – seminar at Swansea (18th Jan)

I am participating in the geography seminar series at Swansea next week. I’ll be talking about some of the ideas that came out of the work we did with social media for the Contagion project.

Mostly the talk is about how ideas about space and spatial experience are important to understanding social media. This, very broadly, appears in two ways: (1) like any technology, how we use social media performs, reflects and reveals forms of spatial understanding and experience; and (2) both the methods and the subsequent analysis of social media that we do as geographers (but also that done in other disciplinary contexts) carry assumptions about space that perhaps need to be made more explicit (especially when methodological techniques carry assumptions about space that contradict the ideas we then employ in our analysis). This comes from a long reflection on a manuscript written for publication: in working through its problems, I realised there were some interesting geographical issues to make more explicit.

Anyway, the seminar is at 2pm on the 18th of January in Glyndwr E (see 11.1 on this campus map). Hope to see a few people there…

Here’s the abstract:

Spacing Social Media

This talk will interrogate the promise as well as the critical implications of the emerging geographies of social media. In particular, the spacing of social media will be addressed in terms of the ways we might understand and theorise space and spatiality. There will be three parts to the discussion: first, the promise of social media research is addressed through an initial exploration of how those media are ineluctably entangled in changes within social, economic and political fields. Second, the translations of data in social media research are addressed through the applications and techniques involved. Third, this provides a basis for a subsequent discussion of the theoretical implications of digital data methods and their spacings. I will argue that the techniques and discourses of social media methods both imply and unsettle forms of spatial understanding, presenting challenges for geographical research.

The Ethics of Information Literacy

Via Michael Sacasas

Yesterday, I caught Derek Thompson of The Atlantic discussing the problem of “fake news” on NPR’s Here and Now. It was all very sensible, of course. Thompson impressed upon the audience the importance of media literacy. He urged listeners to examine the provenance of the information they encounter. He also cited an article that appeared in […]

Read the full article.

The awkwardness of data debates, or how social scientists & policymakers don’t talk

I feel prompted to write something I’ve been puzzling over for a while because of a tweet and a post on Medium [The commodification of data, by Ade Adewunmi] I saw recently:

It’s a good post, but for some academic social scientists this is now an established argument, one that has been developed and been the subject of conferences, books and so on. For a while now, I’ve had a sense of an awkward gap between the different conversations about ‘data’ that I witness through social media. In particular, I’ve been struck by how different the conversations of (social science) academics are from those of people involved in the development and running of ‘digital’ government services*. I recognise that the following is a bit of a caricature, but the quick characterisation serves the wider point I’m interested in exploring.

The fellow academics I follow (mostly in geography but from across the social sciences) have a relatively developed set of political and ethical arguments about the analysis (commercial and governmental, often blurred), big-ness, collecting/gathering, transformation and so on of digital ‘data’, more often than not with reference to tropes around governance, labour, privacy and surveillance and ‘subjectivity’ (usually in the frame of how we are made individual subjects). So, ‘data’ in this set of debates may signal, for some, negative connotations of commercial or institutional ‘big brother’ and so on. There are, of course, plenty of reasons to feel this way.

The digital government services folk, and some of the digital research services people (e.g. from JISC), that I follow often have more diverse and (to me) opaque views. A common foundation for many is the broadly liberal set of arguments for ‘open‘ networked services, somewhere between Stewart Brand’s libertarianism (in the vein of the arguments around “information wants to be free“) and the systematic optimistic liberalism of the W3C: “web for all, web on everything“. Some blog and tweet about the challenges of implementing that ethos and the various systems/techniques developed as a result within the auspices of government. Others write about what is and can be achieved by pursuing the ‘open’ agenda in government. More often than not, there is a positive and ‘progressive’ slant to the debate – developing a ‘common good’ (for want of a better phrase).

The debates do not cross over in my experience. They have their own pet concepts and specialist terminology, with academics (like me) banging on about ‘dataveillance’, ‘discipline’ and ‘control’, governmentality, and, of course, ‘neoliberalism’; whereas the digital government folk I follow talk about ‘digital’ and ‘open’ (as nouns) and ‘agile‘ and ‘lean‘ (also sometimes nouns) practices. I am not saying any of this is representative, simply pointing out that the kinds of conversation are rather different. Neither of these groupings (as I characterise them) talks about or suggests policy in any detail, which is interesting. Social scientists studying ‘data’ often discuss methodological technique and diagnose what are perceived to be negative aspects of digital systems, whereas digital government folk are often highlighting progress made in making ‘public’ data and associated services ‘open’ and more accessible. This may be an issue of ‘methods’. To be (perhaps overly) general: the social scientists I follow do particular kinds of, often, politically inflected research, whereas the digital government folk I follow are attempting to build politically neutral services. So, here, the academics are looking for expressions of power and politics, while the digital government folk are attempting to minimise their effects.

We are left with what appears to be an unfortunate gap in a possibly fruitful conversation – there are constructive ways that academic researchers can offer insights into how opaque power structures can operate and, likewise, the digital government folk actually have experience of making complex digital systems for government. At present, in my Twitter stream I see (at best) mutual suspicion and often just totally separate conversations. There are moments, though, and some academics are clearly engaging, albeit ‘critically’.

I recognise my partiality: there are more than likely in-depth conversations going on that I’m missing, and I do think there’s some really positive work going on, for example as part of the Programmable City project, which is attempting to look at what it means to build digital public services and the kinds of contributions social scientists (like me – there are lots of other kinds, of course!) can make. See, for example, the great talk by Sung-Yueh Perng below.

I welcome suggestions and comments about this, so please do get in touch.

Sung-Yueh Perng – Creating infrastructures with citizens: An exploration of Beta Projects, Dublin City Council from The Programmable City on Vimeo.

* I am not claiming that those I follow on Twitter, and am pigeonholing with this category, are representative in any way; the grouping just works for this broad example.

A quantitative ideology? James Bridle on an algorithmic imaginary

The excellent artist James Bridle has written something for the New Humanist, which is published on their website, entitled “What’s wrong with big data?” Perhaps he’s been reading Rob Kitchin’s The Data Revolution? 🙂 Anyway, it sort of chimes with my previous post on data debates and with the sense in which the problems Bridle so incisively lays out for the readers of his article are not necessarily practical problems but rather are epistemological problems – they pertain to the ways in which we are asked to make sense of the world…

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

Quantified thinking is the dominant ideology of contemporary life: not just in scientific and computational domains but in government policy, social relations and individual identity. It exists equally in qualified research and subconscious instinct, in the calculations of economic austerity and the determinacy of social media. It is the critical balance on which we have placed our ability to act in the world, while critically mistaking the basis for such actions. “More information” does not produce “more truth”, it endangers it.

You can read the whole article on the New Humanist website.

Video> Imagining automation – public talk

I gave a talk for the SW Futurists meet up group this week and they’ve recorded the talks. There are two speakers: Lucas Godfrey (Edinburgh) talked about the challenges of creating models of phenomena in the world so that you can automate things. I talked about the politics of the kinds of stories we tell about automation and how they orient our understandings of how automation might function. Both are included in the video but I’ve skipped to the start of my talk below.

Feel free to leave comments, ask questions etc. using the “Comments” function below. This presentation is sort of based on two bits of work about automation that have been developing as academic presentations. The first is about how we tell stories about work in relation to automation, and the way we use ‘algorithm’ as a proxy for that idea. The second is about how we imagine what apparently automated/automatic technologies are doing and what they can do. I think both of these things constitute what I’ve come to call an “automative imaginary”… I started out calling this “algorithmic—”, but I don’t think that’s what I have ever really meant. I also don’t think another fashionable term, “robots”, is a particularly helpful way to frame the ideas I’m interested in. Anyway, I’m hoping to develop this into a journal article.