Via The Data Justice Lab.
Via Mark Purcell.
“When I am king, you will be first against the wall…”
In an article for The Atlantic, Adrienne LaFrance observes that a report by the security firm Imperva suggests that 51.8% of traffic online is bot traffic (by which they mean 51.8% of a sample of traffic [“16.7 billion bot and human visits collected from August 9, 2016 to November 6, 2016”] sent through their global content delivery network “Incapsula”):
Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online. Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.
LaFrance goes on to cite the marketing director of Imperva (who wants to sell you ‘security’ – he’s in the business of selling data centre services) to observe that:
“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”
How do we judge this report? I find it difficult to know how representative this company’s data is, although they are the purveyor of a ‘global content delivery network’. The numbers seem believable, given how long we’ve been hearing that the majority of traffic is ‘not human’ (e.g. a 2013 article in The Atlantic making a similar point and a 2012 ZDNet article saying the same thing: most web traffic is ‘not human’ and mostly malicious).
The ‘not human’ thing needs to be questioned a bit — yes, it’s not literally the result of a physical action but, then, how much of the activity on the electric grid can be said to be ‘not human’ too? I’d hazard that the majority of that so-called ‘not human’ traffic is under some kind of regular oversight and monitoring – it is, more or less, the expression of deliberative (human) agency. Indeed, to reduce the ‘human’ to what our simian digits can make happen seems ridiculous to me… We need a more expansive understanding of technical (as in technics) agency. We need more nuanced ways to come to terms with the scale and complexity of the ways we, as a species, produce and perform our experiences of everyday life – of what counts as work and the things we take for granted.
Owain Jones blogs about an interesting conference at QMUL in May…
Microsoft Cognitive Services (sounds like something from a Philip K. Dick novel) have opened up APIs, which you can call (subscription required) to outsource forms of machine learning. So, if you want to identify faces in pictures or videos you can call on the “Face API”, for example. Obviously, this is all old news… but, it’s sort of interesting to maybe think about how this foregrounds the homogenisation of process – the apparent ‘power’ of these particular programmes (accessed via their APIs) may be their widespread use.
This might be of further interest when we consider things like the “Emotion API”, through which (in line with many other forms of programmatic measure of the display or representation of ’emotion’ or ‘sentiment’) the programme scores a facial expression along several measures, listed in the free example as: “anger”, “contempt”, “disgust”, “fear”, “happiness”, “neutral”, “sadness”, “surprise”. For each image you’ll get a table of scores for each recognised face. Have a play – it’s beguiling, but of course it then prompts the sorts of questions lots of people have been asking about how ‘affect’ and emotions can get codified (e.g. Massumi) and the politics and ethics of the ‘algorithms’ and such like that do these things (e.g. Beer).
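To make the shape of that “table of scores” concrete, here is a minimal sketch in Python of what consuming such a response might look like. The JSON structure below is an assumption modelled on the free demo’s output (a list of detected faces, each with a dictionary of scores over the eight labels above); the actual field names and values returned by Microsoft’s service may differ, and no real API call is made here.

```python
# Hypothetical sample of the kind of JSON an emotion-scoring API might
# return for one image: one entry per detected face, each carrying a
# dict of scores over the eight emotion labels. The numbers here are
# invented for illustration.
sample_response = [
    {
        "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 97},
        "scores": {
            "anger": 0.003, "contempt": 0.001, "disgust": 0.002,
            "fear": 0.001, "happiness": 0.920, "neutral": 0.060,
            "sadness": 0.010, "surprise": 0.003,
        },
    },
]

def top_emotion(face):
    """Return the highest-scoring emotion label for one detected face."""
    return max(face["scores"], key=face["scores"].get)

for face in sample_response:
    print(top_emotion(face))
```

The point of the sketch is how little the caller sees: a facial expression comes back as a flat table of numbers, and the ‘result’ is simply whichever label scores highest – which is precisely where the questions about codifying affect begin.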
I am probably late to all of this and seeing significance here because it’s relatively novel to me (not the tech itself but the ‘easy-to-use’ API structure), nevertheless it seems interesting, to me at least, that these forms of machine learning are being produced as mundane through being made abundant, as apparently straightforward tools. Maybe what I’m picking up on is that these APIs, the programmes they grant access to, are relatively transparent, whereas much of what various ‘algorithm studies’ folk look at is opaque. Microsoft’s Cognitive Services make mundane what, to some, are very political technologies.
This looks interesting… I confess I’ve not listened yet.
Saw this via Stuart Elden.
A hypnotic and really engrossing video that follows the path of the US-Mexico border by Josh Begley at the Intercept in partnership with Field_of_Vision. Read Begley’s article about it on The Intercept, which thinks about the video and how/what it represents through Paglen’s idea of “seeing machines”.
Via Stuart Elden.
This is an interesting development, not least in terms of the gradual realignment of EPD in the last five or so years…