Via The Data Justice Lab.
“When I am king, you will be first against the wall…”
In an article for The Atlantic, Adrienne LaFrance observes that a report by the security firm Imperva suggests that 51.8% of traffic online is bot traffic (by which they mean 51.8% of a sample of traffic [“16.7 billion bot and human visits collected from August 9, 2016 to November 6, 2016”] sent through their global content delivery network “Incapsula”):
Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online. Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.
LaFrance goes on to cite the marketing director of Imperva (who wants to sell you ‘security’ – he’s in the business of selling data centre services), who observes that:
“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”
How do we judge this report? I find it difficult to know how representative this company’s representation of their data is, although they are the purveyors of a ‘global content delivery network’. The numbers seem believable, given how long we’ve been hearing that the majority of traffic is ‘not human’ (e.g. a 2013 article in The Atlantic making a similar point and a 2012 ZDNet article saying the same thing: most web traffic is ‘not human’ and mostly malicious).
The ‘not human’ thing needs to be questioned a bit — yes, it’s not literally the result of a physical action but, then, how much of the activity on the electric grid can be said to be ‘not human’ too? I’d hazard that the majority of that so-called ‘not human’ traffic is under some kind of regular oversight and monitoring – it is, more or less, the expression of deliberative (human) agency. Indeed, to reduce the ‘human’ to what our simian digits can make happen seems ridiculous to me… We need a more expansive understanding of technical (as in technics) agency. We need more nuanced ways to come to terms with the scale and complexity of the ways we, as a species, produce and perform our experiences of everyday life – of what counts as work and the things we take for granted.
Microsoft Cognitive Services (sounds like something from a Philip K. Dick novel) have opened up APIs, which you can call on (req. subscription), to outsource forms of machine learning. So, if you want to identify faces in pictures or videos you can call on the “Face API“, for example. Obviously, this is all old news… but, it’s sort of interesting to maybe think about how this foregrounds the homogenisation of process – the apparent ‘power’ of these particular programmes (accessed via their APIs) may be their widespread use.
This might be of further interest when we consider things like the “Emotion API” through which (in line with many other forms of programmatic measure of the display or representation of ’emotion’ or ‘sentiment’) the programme scores a facial expression along several measures, listed in the free example as: “anger”, “contempt”, “disgust”, “fear”, “happiness”, “neutral”, “sadness”, “surprise”. For each image you’ll get a table of scores for each recognised face. Have a play – it’s beguiling, but of course it then perhaps prompts the sorts of questions lots of people have been asking about how ‘affect’ and emotions can get codified (e.g. Massumi) and the politics and ethics of the ‘algorithms’ and such like that do these things (e.g. Beer).
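To make the ‘easy-to-use’ point concrete: a call to the Emotion API boils down to a single HTTP POST. The sketch below is a minimal illustration using Python’s standard library – the endpoint URL, the placeholder subscription key, and the exact response shape are assumptions for illustration (the service is assumed to return, per the free example, a score table per recognised face), not a definitive implementation:

```python
import json
import urllib.request

# Placeholders: the real endpoint region and subscription key come from a
# Microsoft Cognitive Services account; these values are illustrative only.
EMOTION_API_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
SUBSCRIPTION_KEY = "your-subscription-key"


def score_image(image_url):
    """POST an image URL to the Emotion API; return its JSON response,
    assumed here to be a list with one score table per recognised face."""
    request = urllib.request.Request(
        EMOTION_API_URL,
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


def dominant_emotion(scores):
    """Reduce one face's score table to its highest-scoring measure."""
    return max(scores, key=scores.get)


# A face scored mostly 'happiness' reduces to that label:
scores = {"anger": 0.01, "contempt": 0.0, "disgust": 0.0, "fear": 0.0,
          "happiness": 0.95, "neutral": 0.03, "sadness": 0.0, "surprise": 0.01}
print(dominant_emotion(scores))  # prints "happiness"
```

The `dominant_emotion` helper just picks the highest-scoring measure, which is more or less all the beguiling demo page does with the table it displays – the ‘mundane’ packaging of a rather consequential reduction.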
I am probably late to all of this and seeing significance here because it’s relatively novel to me (not the tech itself but the ‘easy-to-use’ API structure), nevertheless it seems interesting, to me at least, that these forms of machine learning are being produced as mundane through being made abundant, as apparently straightforward tools. Maybe what I’m picking up on is that these APIs, the programmes they grant access to, are relatively transparent, whereas much of what various ‘algorithm studies’ folk look at is opaque. Microsoft’s Cognitive Services make mundane what, to some, are very political technologies.
This looks interesting… I confess I’ve not listened yet.
Via Tony Sampson…
Call for presentations and artworks
Affect and Social Media#3
Including the Sensorium Art Show (the sequel)
Event Date: Thurs 25th May, 2017
Venue: University of East London, Docklands Campus
Confirmed keynote: Prof Jessica Ringrose (UCL)
Call for 15min presentations and artworks
The organizers of A&SM#3 welcome proposals for 15min presentations and artworks that interpret and explore the affective, feely and emotional encounters with social media grasped through the following themes:
Presentations and artworks can widely interpret each theme, but preference will be given to proposals that respond in two ways.
Firstly, the organizers are particularly interested in creative responses (academic and artistic) to recent social media events – the US election, for example. So proposals might address how the Trump win allows us to develop a fresh understanding of shared experiences, emotional engagements or new entanglements with social media.
Secondly, we ask presenters and artists to consider how their approach to affect and social media can be put to work in an education context. For example, how can the potential of affect theory reach out across teaching practices and develop novel understandings of the political nature and transformative possibilities of teaching?
The academic part of this call is open to experienced scholars, new researchers and postgrad students from across the disciplinary boundaries of affect studies and related areas of study interested in theorizing and working with emotion and feelings in a social media context. We welcome a good mixture of innovative conceptual and methodological approaches.
The Sensorium Art Exhibit will interweave the conference proceedings and bring them to a close with a special show, alongside free drinks and nibbles.
15min presentations and artwork proposals to: firstname.lastname@example.org
Please include a 200-word (max) description and a short bio, including academic affiliation and relevant links to previous work and/or a website profile.
DEADLINE: Tues 28th Feb 2017.
Full registration details will be made available from 27th Jan via UEL event page.
Former colleagues at UWE in the Digital Cultures Research Centre are formally launching their project on what they call ‘ambient literature’ this Friday.
There’s some info on the project copied below, it follows on from a trajectory you can trace through the ‘pervasive media’ canon (with the lovely people from Calvium [many formerly of HP Labs Bristol] instrumental in how this has been technically achieved), from the Mobile Bristol RIOT! 1831 project, Duncan Speakman’s subtle mobs, the fabulous Fortnight project from Proto-type, Curzon Memories, REACT projects like These Pages Fall Like Ash and (my colleague Nicola Thomas’) Dollar Princess – a rich and varied history of work…
I feel prompted to write something I’ve been puzzling over for a while because of a tweet and post on medium [The commodification of data, by Ade Adewunmi] I saw recently:
— Peter Wells (@peterkwells) October 30, 2016
It’s a good post, but for some academic social scientists this is now an established argument that’s been developed, been the subject of conferences and books and so on. For a while now, I’ve had a sense of an awkward gap between the conversations about the various concerns for ‘data’ I witness through social media. In particular, I’ve been struck by how different the conversations of (social sciences) academics are from those of people involved in the development and running of ‘digital’ government services*. I recognise that the following is a bit of a caricature, but the quick characterisation serves to assist the wider point I’m interested in exploring.
The fellow academics I follow (mostly in geography but from across the social sciences) have a relatively developed set of political and ethical arguments about the analysis (commercial & governmental–often blurred), big-ness, collecting/gathering, transformation and so on of digital ‘data’, more often than not with reference to tropes around governance, labour, privacy and surveillance and ‘subjectivity’ (usually in the frame of how we are made individual subjects). So, ‘data’ in this set of debates may signal, for some, negative connotations of commercial or institutional ‘big brother’ and so on. There are, of course, plenty of reasons to feel this way.
The digital government services folk, and some of the digital research services people (e.g. from JISC), that I follow often have more diverse and opaque (to me) views. A common foundation for many is the broadly liberal set of arguments for ‘open‘ networked services, somewhere between Stewart Brand’s libertarianism (in the vein of the arguments around “information wants to be free“) and the systematic optimistic liberalism of the W3C: “web for all, web on everything“. Some blog and tweet about the challenges of implementing that ethos and the various systems/techniques developed as a result within the auspices of government. Others write about what is and can be achieved by pursuing the ‘open’ agenda in government. More often than not, there is a positive and ‘progressive’ slant to the debate – developing a ‘common good’ (for want of a better phrase).
The debates do not cross over in my experience. They have their own pet concepts and specialist terminology, with academics (like me) banging on about ‘dataveillance’, ‘discipline’ and ‘control’, governmentality, and, of course, ‘neoliberalism’; whereas the digital government folk I follow talk about ‘digital’ and ‘open’ (as nouns), ‘agile‘ and ‘lean‘ (also sometimes nouns) practices. I am not saying any of this is representative, simply pointing out that the kinds of conversation are rather different. Neither of these groupings (as I characterise them) talk about or suggest policy in any detail, which is interesting. Social scientists studying ‘data’ (etc.) often discuss methodological technique and diagnose what are perceived to be negative aspects of digital systems, whereas digital government folk are often highlighting progress being made in making ‘public’ data and associated services ‘open’ and more accessible. This may be an issue of ‘methods’. To be (perhaps overly) general – the social scientists I follow do particular kinds of, often, politically inflected research, whereas the digital government folk I follow are attempting to build politically neutral services. So, here, the academics are looking for expressions of power and politics, while the digital government folk are attempting to minimise their effects.
We are left with what appears to be an unfortunate gap in a possibly fruitful conversation – there are constructive ways that academic researchers can offer insights into how opaque power structures can operate and, likewise, the digital government folk actually have experience of making complex digital systems for government. At present, in my Twitter stream I see (at best) mutual suspicion and often just totally separate conversations. There are moments, though, and some academics are clearly engaging, albeit ‘critically’, e.g.
— LSE Impact Blog (@LSEImpactBlog) October 26, 2016
I recognise my partiality – there are more than likely in-depth conversations going on that I’m missing, and I do think there’s some really positive work going on, for example as part of the Programmable City project (see the great talk by Sung-Yueh Perng below), which is attempting to look at what it means to build digital public services and the kinds of contributions social scientists (like me – there are lots of other kinds, of course!) can make.
I welcome suggestions and comments about this, so please do get in touch.
* I am not claiming that those I follow on Twitter, and am pigeonholing with this category, are representative in any way; this just works for this broad example.
The excellent artist James Bridle has written something for the New Humanist, which is published on their website, entitled “What’s wrong with big data?” Perhaps he’s been reading Rob Kitchin’s The Data Revolution? 🙂 Anyway, it sort of chimes with my previous post on data debates and with the sense in which the problems Bridle so incisively lays out for the readers of his article are not necessarily practical problems but rather are epistemological problems – they pertain to the ways in which we are asked to make sense of the world…
This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.
Quantified thinking is the dominant ideology of contemporary life: not just in scientific and computational domains but in government policy, social relations and individual identity. It exists equally in qualified research and subconscious instinct, in the calculations of economic austerity and the determinacy of social media. It is the critical balance on which we have placed our ability to act in the world, while critically mistaking the basis for such actions. “More information” does not produce “more truth”, it endangers it.
You can read the whole article on the New Humanist website.
the work, in general, seems to be quite aloof, or detached, or trying to stay above the fray, to remain non-committal, as though that were the more professional, academic stance to take. All this detachment seems to have produced an upshot that is something like: “with all the new technologies coming into our lives in the past 10 years or so, it is important to think through their implications instead of just adopting them uncritically.”
Perhaps those that do “geography o[f] software/ information/ geodata” would like to respond..(?) For me, I think, there is simply a difference in focus between Purcell’s locating of politics and, for example – his colleague at Washington, Sarah Elwood’s in relation to “geodata” (e.g.), i.e. perhaps the difference between a politics of production as such and a politics of implementation.
Nevertheless, Purcell’s point about commons and peer production in open source software is valid – perhaps those involved in recent conference sessions on geographies of software have addressed these issues in some way? (I don’t know, I wasn’t there…)