Category Archives: software studies

Reblog> Social Justice in an Age of Datafication: Launch of the Data Justice Lab

Via The Data Justice Lab.


The Data Justice Lab will be officially launched on Friday, 17 March 2017. Join us for the launch event at Cardiff University’s School of Journalism, Media and Cultural Studies (JOMEC) at 4pm. Three international speakers will discuss the challenges of data justice.

The event is free but requires pre-booking at https://www.eventbrite.com/e/social-justice-in-an-age-of-datafication-launching-the-data-justice-lab-tickets-31849002223

Data Justice Lab — Launch Event — Friday 17 March 4pm — Cardiff University

Our financial transactions, communications, movements, relationships, and interactions with government and corporations all increasingly generate data that are used to profile and sort groups and individuals. These processes can affect both individuals as well as entire communities that may be denied services and access to opportunities, or wrongfully targeted and exploited. In short, they impact on our ability to participate in society. The emergence of this data paradigm therefore introduces a particular set of power dynamics requiring investigation and critique.

The Data Justice Lab is a new space for research and collaboration at Cardiff University that has been established to examine the relationship between datafication and social justice. With this launch event, we ask: What does social justice mean in an age of datafication? How are data-driven processes impacting on certain communities? In what way does big data change our understanding of governance and politics? And what can we do about it?

We invite you to come and participate in this important discussion. We will be joined by the following keynote speakers:

Virginia Eubanks (New America), Malavika Jayaram (Digital Asia Hub), and Steven Renderos (Center for Media Justice).

Virginia Eubanks is the author of Digital Dead End: Fighting for Social Justice in the Information Age (MIT Press, 2011) and co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith (SUNY Press, 2014). She is also the cofounder of Our Knowledge, Our Power (OKOP), a grassroots economic justice and welfare rights organization. Professor Eubanks is currently working on her third book, Digital Poorhouse, for St. Martin’s Press. In it, she examines how new data-driven systems regulate and discipline the poor in the United States. She is a Fellow at New America, a Washington, D.C. think tank and the recipient of a three-year research grant from the Digital Trust Foundation (with Seeta Peña Gangadharan and Joseph Turow) to explore the meaning of digital privacy and data justice in marginalized communities.

Malavika Jayaram is the Executive Director of the Digital Asia Hub in Hong Kong. Previously she was a Fellow at the Berkman Klein Center for Internet & Society at Harvard University, where she focused on privacy, identity, biometrics and data ethics. She worked at law firms in India and the UK, and she was voted one of India’s leading lawyers. She is Adjunct Faculty at Northwestern University and a Fellow with the Centre for Internet & Society, India, and she is on the Advisory Board of the Electronic Privacy Information Center (EPIC).

Steven Renderos is Organizing Director at the Center for Media Justice. With over 10 years of organizing experience, Steven has been involved in campaigns to lower the cost of prison phone calls, preserve the Open Internet, and expand community-owned radio stations. Steven previously served as Project Coordinator of the Minnesotano Media Empowerment Project, an initiative focused on improving the quality and quantity of media coverage and representation of Latinos in Minnesota. He currently serves on the boards of Organizing Apprenticeship Project and La Asamblea de Derechos Civiles. Steven (aka DJ Ren) also hosts a show called Radio Pocho at a community radio station and spins at venues in NYC.

The event will be followed by a reception.

The internet is mostly bots(?)

When I am king, you will be first against the wall…

In an article for The Atlantic, Adrienne LaFrance observes that a report by the security firm Imperva suggests that 51.8% of traffic online is bot traffic (by which they mean 51.8% of a sample of traffic [“16.7 billion bot and human visits collected from August 9, 2016 to November 6, 2016”] passing through their global content delivery network “Incapsula”):

Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online. Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.

LaFrance goes on to cite the marketing director of Imperva (who wants to sell you ‘security’ – he’s in the business of selling data centre services) to observe that:

“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”

How do we judge this report? I find it difficult to know how representative this company’s presentation of their data is, although they are the purveyor of a ‘global content delivery network’. The numbers seem believable, given how long we’ve been hearing that the majority of traffic is ‘not human’ (e.g. a 2013 article in The Atlantic making a similar point and a 2012 ZDNet article saying the same thing: most web traffic is ‘not human’ and mostly malicious).

The ‘not human’ thing needs to be questioned a bit — yes, it’s not literally the result of a physical action but, then, how much of the activity on the electric grid can be said to be ‘not human’ too? I’d hazard that the majority of that so-called ‘not human’ traffic is under some kind of regular oversight and monitoring – it is, more or less, the expression of deliberative (human) agency. Indeed, to reduce the ‘human’ to what our simian digits can make happen seems ridiculous to me… We need a more expansive understanding of technical (as in technics) agency. We need more nuanced ways to come to terms with the scale and complexity of the ways we, as a species, produce and perform our experiences of everyday life – of what counts as work and the things we take for granted.

Microsoft Cognitive Services

Microsoft Cognitive Services (sounds like something from a Philip K. Dick novel) has opened up APIs, which you can call on (req. subscription), to outsource forms of machine learning. So, if you want to identify faces in pictures or videos you can call on the “Face API”, for example. Obviously, this is all old news… but it’s sort of interesting to think about how this foregrounds the homogenisation of process – the apparent ‘power’ of these particular programmes (accessed via their APIs) may lie in their widespread use.

This might be of further interest when we consider things like the “Emotion API”, through which (in line with many other programmatic measures of the display or representation of ‘emotion’ or ‘sentiment’) the programme scores a facial expression along several measures, listed in the free example as: “anger”, “contempt”, “disgust”, “fear”, “happiness”, “neutral”, “sadness”, “surprise”. For each image you’ll get a table of scores for each recognised face. Have a play – it’s beguiling, but of course it then prompts the sorts of questions lots of people have been asking about how ‘affect’ and emotions can get codified (e.g. Massumi) and the politics and ethics of the ‘algorithms’ and suchlike that do these things (e.g. Beer).
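To give a sense of how mundane the scores table is to work with: below is a minimal Python sketch that picks out the dominant emotion for each recognised face. The sample JSON is hypothetical (invented for illustration) but follows the general response shape the Emotion API documented at the time – one entry per detected face, each with a bounding rectangle and scores across the eight measures listed above.

```python
import json

# Hypothetical sample response, shaped like the Emotion API's documented
# output: one entry per detected face, with a bounding rectangle and a
# table of scores across the eight emotion measures.
sample_response = json.loads("""
[
  {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 97},
    "scores": {
      "anger": 0.003,
      "contempt": 0.00000005,
      "disgust": 0.000009,
      "fear": 0.0002,
      "happiness": 0.9876,
      "neutral": 0.001,
      "sadness": 0.00002,
      "surprise": 0.008
    }
  }
]
""")

def dominant_emotions(faces):
    """Return the highest-scoring emotion label for each detected face."""
    return [max(face["scores"], key=face["scores"].get) for face in faces]

print(dominant_emotions(sample_response))  # → ['happiness']
```

A couple of lines of code and the full ambiguity of a facial expression has been collapsed into a single label – which is rather the point being made above.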

I am probably late to all of this and seeing significance here because it’s relatively novel to me (not the tech itself but the ‘easy-to-use’ API structure); nevertheless it seems interesting, to me at least, that these forms of machine learning are being rendered mundane through being made abundant, as apparently straightforward tools. Maybe what I’m picking up on is that these APIs, and the programmes they grant access to, are relatively transparent, whereas much of what various ‘algorithm studies’ folk look at is opaque. Microsoft’s Cognitive Services make mundane what, to some, are very political technologies.


Institute of Network Cultures podcast

This looks interesting… I confess I’ve not listened yet.

The INC has a new publication format: the Zero Infinite podcast!

In May 2016, we invited two podcast hosts to our symposium on art criticism: Stephanie Afrifa (of the Nation of Overthinkers podcast network) and Botte Jellema (host of the De Eeuw van de Amateur (The Century of the Amateur) podcast). They spoke about the medium with such enthusiasm that one of our affiliated researchers, Nadine Roestenburg, decided to start her own podcast on post-digital art, and we soon felt it was time for the INC to try our hand at it as well! Botte came back to teach us some more skills and to let us experience the delight that is a good microphone. We tested and experimented for some time, and are now very happy to be able to present the very first episode of Zero Infinite to you!

LISTEN TO THE FIRST EPISODE HERE.

The podcast is hosted by Miriam Rasch and covers topics like digital publishing, economic alternatives, revenue models in the arts and online culture. The first episode features interviews with Alex Foti, Baruch Gottlieb and Henry Warwick, and a discussion on precarity and anti-austerity measures. The latter half of the episode is a homage to the work of the late Mark Fisher. We discuss his ideas on neoliberalism and its influence on individual wellbeing through clips from his talk at MyCreativity in 2014.

Subscribe to our Soundcloud channel, or find the newest episode on our publications page.

Affect & Social Media 3.0 CFP

Via Tony Sampson

Affect and Social Media#3 2nd CFP

Call for presentations and artworks

Affect and Social Media#3

experience

engagement 

entanglement                                

Including the Sensorium Art Show (the sequel)

Event Date: Thurs 25th May, 2017

Venue: University of East London, Docklands Campus

Confirmed keynote: Prof Jessica Ringrose (UCL) 

https://iris.ucl.ac.uk/iris/browse/profile?upi=JLRIN58

Call for 15min presentations and artworks

The organizers of A&SM#3 welcome proposals for 15min presentations and artworks that interpret and explore the affective, feely and emotional encounters with social media grasped through the following themes:

  1. Experience
  2. Engagement
  3. Entanglement 

Presentations and artworks can widely interpret each theme, but preference will be given to proposals that respond in two ways.

Firstly, the organizers are particularly interested in creative responses (academic and artistic) to recent social media events – the US election, for example. So proposals might address how the Trump win allows us to develop a fresh understanding of shared experiences, emotional engagements or new entanglements with social media.

Secondly, we ask presenters and artists to consider how their approach to affect and social media can be put to work in an education context. For example, how can the potential of affect theory reach out across teaching practices and develop novel understandings of the political nature and transformative possibilities of teaching?

The academic part of this call is open to experienced scholars, new researchers and postgrad students from across the disciplinary boundaries of affect studies and related areas of study interested in theorizing and working with emotion and feelings in a social media context. We welcome a good mixture of innovative conceptual and methodological approaches.

The Sensorium Art Exhibit will interweave the conference proceedings and bring it to a close with a special show, alongside free drinks and nibbles.

15min presentations and artwork proposals to: t.d.sampson@uel.ac.uk

Please include 200 word max description and short bio including academic affiliation and relevant links to previous work and/or website profile.

DEADLINE: Tues 28th Feb 2017.

Full registration details will be made available from 27th Jan via UEL event page.

https://www.uel.ac.uk/Events/2017/05/Affect-and-Social-Media-3

Ambient literature – new research project

Former colleagues at UWE in the Digital Cultures Research Centre are formally launching their project on what they call ‘ambient literature’ this Friday.

There’s some info on the project copied below. It follows on from a trajectory you can trace through the ‘pervasive media’ canon (with the lovely people from Calvium [many formerly of HP Labs Bristol] instrumental in how this has been technically achieved): from the Mobile Bristol RIOT! 1831 project, Duncan Speakman’s subtle mobs, the fabulous Fortnight project from Proto-type, Curzon Memories, REACT projects like These Pages Fall Like Ash and (my colleague Nicola Thomas’) Dollar Princess – a rich and varied history of work…

Ambient Literature is a two-year collaboration between the University of the West of England, Bath Spa University, the University of Birmingham, and development partners Calvium, Ltd., established to investigate the locational and technological future of the book. Funded through a grant from the Arts and Humanities Research Council, the project is focused on the study of emergent forms of literature that make use of novel technologies and social practices in order to create robust and evocative experiences for readers.

Launched in London, Bristol and online in June 2016, the project draws on the REACT Hub’s experience working with creative industries in order to produce three experimental projects from three different authors. Forming the heart of the project, these commissioned pieces allow researchers to study the processes of innovation and negotiation that become visible as established authors work in the new forms opened up by the idea of Ambient Literature. Combining practice-based, empirical and theoretical research, the project seeks to test out new literary forms and develop a grammar for writing Ambient Literature.

This is an interdisciplinary project focused on understanding how the situation of reading is changing through pervasive and ubiquitous computing. Drawing on literary studies, creative writing, design, human-computer interaction, performance and new media studies, the research being developed looks to engage with the history of the book and see what that history is able to tell us about its future.

The awkwardness of data debates, or how social scientists & policymakers don’t talk

I feel prompted to write something I’ve been puzzling over for a while because of a tweet and post on medium [The commodification of data, by Ade Adewunmi] I saw recently:

It’s a good post, but for some academic social scientists this is now an established argument that’s been developed, been the subject of conferences and books and so on. For a while now, I’ve had a sense of an awkward gap between the conversations about the various concerns for ‘data’ I witness through social media. In particular, I’ve been struck by how different the conversations are between (social sciences) academics from those involved in the development and running of ‘digital’ government services*. I recognise that the following is a bit of a caricature but the quick characterisation serves to assist the wider point I’m interested in exploring.

The fellow academics I follow (mostly in geography but from across the social sciences) have a relatively developed set of political and ethical arguments about the analysis (commercial and governmental – often blurred), big-ness, collecting/gathering, transformation and so on of digital ‘data’, more often than not with reference to tropes around governance, labour, privacy and surveillance, and ‘subjectivity’ (usually in the frame of how we are made individual subjects). So, ‘data’ in this set of debates may signal, for some, negative connotations of commercial or institutional ‘big brother’ and so on. There are, of course, plenty of reasons to feel this way.

The digital government services folk, and some of the digital research services people (e.g. from JISC), that I follow often have more diverse and opaque (to me) views. A common foundation for many is the broadly liberal set of arguments for ‘open‘ networked services, somewhere between Stewart Brand’s libertarianism (in the vein of the arguments around “information wants to be free“) and the systematic optimistic liberalism of the W3C: “web for all, web on everything“. Some blog and tweet about the challenges of implementing that ethos and the various systems/techniques developed as a result within the auspices of government. Others write about what is and can be achieved by pursuing the ‘open’ agenda in government. More often that not, there is a positive and ‘progressive’ slant to the debate – developing a ‘common good’ (for want of a better phrase).

The debates do not cross over, in my experience. They have their own pet concepts and specialist terminology, with academics (like me) banging on about ‘dataveillance’, ‘discipline’ and ‘control’, governmentality, and, of course, ‘neoliberalism’; whereas the digital government folk I follow talk about ‘digital’ and ‘open’ (as nouns), and ‘agile‘ and ‘lean‘ (also sometimes nouns) practices. I am not saying any of this is representative, simply pointing out that the kinds of conversation are rather different. Neither of these groupings (as I characterise them) talks about or suggests policy in any detail, which is interesting. Social scientists studying ‘data’ (etc.) often discuss methodological technique and diagnose what are perceived to be negative aspects of digital systems, whereas digital government folk are often highlighting progress being made in making ‘public’ data and associated services ‘open’ and more accessible. This may be an issue of ‘methods’. To be (perhaps overly) general: the social scientists I follow do particular kinds of, often, politically inflected research, whereas the digital government folk I follow are attempting to build politically neutral services. So, here, the academics are looking for expressions of power and politics, while the digital government folk are attempting to minimise their effects.

We are left with what appears to be an unfortunate gap in a possibly fruitful conversation – there are constructive ways that academic researchers can offer insights into how opaque power structures operate and, likewise, the digital government folk have real experience of making complex digital systems for government. At present, in my Twitter stream I see (at best) mutual suspicion and often just totally separate conversations. There are moments, though, and some academics are clearly engaging, albeit ‘critically’.

I recognise my partiality: there are more than likely in-depth conversations going on that I’m missing, and I do think there’s some really positive work happening, for example as part of the Programmable City project – see the great talk by Sung-Yueh Perng below – which is attempting to look at what it means to build digital public services and the kinds of contributions social scientists (like me – there are lots of other kinds, of course!) can make.

I welcome suggestions and comments about this, so please do get in touch.

Sung-Yueh Perng – Creating infrastructures with citizens: An exploration of Beta Projects, Dublin City Council from The Programmable City on Vimeo.

* I am not claiming that those I follow on Twitter and am pigeonholing with this category are representative in any way; this just works for this broad example.

A quantitative ideology? James Bridle on an algorithmic imaginary

The excellent artist James Bridle has written something for the New Humanist, which is published on their website, entitled “What’s wrong with big data?” Perhaps he’s been reading Rob Kitchin’s The Data Revolution? 🙂 Anyway, it sort of chimes with my previous post on data debates and with the sense in which the problems Bridle so incisively lays out for the readers of his article are not necessarily practical problems but rather are epistemological problems – they pertain to the ways in which we are asked to make sense of the world…

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

Quantified thinking is the dominant ideology of contemporary life: not just in scientific and computational domains but in government policy, social relations and individual identity. It exists equally in qualified research and subconscious instinct, in the calculations of economic austerity and the determinacy of social media. It is the critical balance on which we have placed our ability to act in the world, while critically mistaking the basis for such actions. “More information” does not produce “more truth”, it endangers it.

You can read the whole article on the New Humanist website.

A lack of politics in the geographies of code/software?

An interesting provocation for those who feel wedded to the ‘digital turn‘ from Mark Purcell on his blog… in particular:

the work, in general, seems to be quite aloof, or detached, or trying to stay above the fray, to remain non-committal, as though that were the more professional, academic stance to take.  All this detachment seems to have produced an upshot that is something like: “with all the new technologies coming into our lives in the past 10 years or so, it is important to think through their implications instead of just adopting them uncritically.”

Perhaps those who do “geography o[f] software/information/geodata” would like to respond… (?) For me, I think, there is simply a difference in focus between Purcell’s locating of politics and that of, for example, his colleague at Washington, Sarah Elwood, in relation to “geodata” (e.g.) – perhaps the difference between a politics of production as such and a politics of implementation.

Nevertheless, Purcell’s point about commons and peer production in open source software is valid – perhaps those involved in recent conference sessions on geographies of software have addressed these issues in some way? (I don’t know, I wasn’t there…)

Read Mark Purcell’s full blogpost on his blog.