We need to ask what data capture and management would look like if they were guided by a children's framework such as the one developed by Sonia Livingstone and endorsed by the Children's Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions as they emerge over the coming months.
Over on Antipode's site there's a blog post about an intervention symposium on "algorithmic governance", brought together by Jeremy Crampton and Andrea Miller on the back of sessions at the AAG in 2016. It's good that this is available open access and, I hope, helpful that it perhaps puts to bed some of the definition wrangling that has been in fashion. Obviously, a lot draws on the work of the geographer Louise Amoore and of the political theorist Antoinette Rouvroy, which is great.
Reading through the overview and skimming the individual papers, I remain puzzled by the wider creeping use of an unqualified "non-human" to talk about software and the sociotechnical systems it runs/is run on… this seems to play down precisely the political issues raised in this particular symposium – the kinds of algorithms at issue in this debate are written and maintained by people; they're not somehow separate or at a distance… It's also interesting to note that a sizeable chunk of the debates concerns 'data', yet the symposium doesn't have "data" in the title – but maybe 'data–' is passé… 🙂
I’ve copied below the intro to the post, but please check out the whole thing over on Antipode’s site.
This looks interesting (via Programmable City):
The Musée de la Civilisation in Quebec has an exhibition about ancient 'doubles' or 'twins', as part of which you can submit your photo and a program will match your face with images of statues in the collection.
It's been in the press and, of course, is 'just a bit of fun', but it's also sort of interesting to submit images and try to work out how the pattern matching works – it's not at all obvious! There's probably something smart to say about 'algorithms' here, but I've not had enough sleep… check it out for yourself: Mon Sosie À 2000 Ans.
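The museum hasn't (as far as I know) published how the matching works, but a common pattern is to reduce each face to a numeric feature vector and pick the statue whose vector is closest. A minimal sketch, with hand-made vectors standing in for real face embeddings (the names and numbers are my own inventions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, collection):
    """Return the name of the statue whose vector is most similar to the query."""
    return max(collection, key=lambda name: cosine_similarity(query, collection[name]))

# Hypothetical feature vectors; a real system would get these from a
# face-embedding model, not by hand.
statues = {
    "Battataï": [0.9, 0.1, 0.3],
    "Augustus": [0.2, 0.8, 0.5],
}
visitor = [0.85, 0.15, 0.35]
print(best_match(visitor, statues))  # Battataï
```

Part of why the exhibit's matches feel opaque may be that similarity in embedding space doesn't map neatly onto what we perceive as resemblance.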
Here’s me and Battataï:
Songs written by Sony CSL’s “AI”…
From the Sony CSL “flow machines” website:
Flow Machines is a research project funded by the European Research Council (ERC) and coordinated by François Pachet (Sony CSL Paris – UPMC).
The goal of Flow Machines is to research and develop Artificial Intelligence systems able to generate music autonomously or in collaboration with human artists.
We do so by turning music style into a computational object. Musical style can come from individual composers, for example Bach or The Beatles, or a set of different artists, or, of course, the style of the musician who is using the system.
Their “Deep Bach” thing was doing the rounds at the end of last year, so I presume there will be more to come.
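To make "turning music style into a computational object" a bit more concrete: one of the simplest versions of the idea is a Markov chain that learns note-to-note transitions from a corpus and then generates new sequences in that "style". This is a toy cousin of what Flow Machines does (their models are far more sophisticated); the melody here is invented for illustration:

```python
import random

def learn_transitions(melody):
    """Build a first-order transition table: note -> list of notes that followed it."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=None):
    """Random-walk the transition table to produce a new melody in the learned 'style'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: no observed continuation
        out.append(rng.choice(choices))
    return out

# "Style" learned from a tiny made-up melody.
corpus = ["C", "E", "G", "E", "C", "G", "E", "C"]
table = learn_transitions(corpus)
print(generate(table, "C", 8, seed=1))
```

The output only ever contains transitions seen in the corpus, which is the sense in which the style has become a manipulable object: swap the corpus and the same machinery generates in a different style.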
Thanks to Max Dovey for the tip on this…
This seems interesting as a sort of provocation about what blockchain says/asks about ownership, perhaps, although I'm not overly convinced by the gimmick of changing words such that readers unravel, or "explode", the book… I wonder whether The Raw Shark Texts or These Pages Fall Like Ash might be a deeper, or maybe I mean more nuanced, take on such things… however, I haven't explored this enough yet, and it's good to see Google doing something like this (I think?!)
Here's a snip from Googler Tea Uglow's Medium post about this…
It's a book. On your phone. Well, on the internet. Anyone can read it. It's 20 pages long. Each page has 128 words, and there are 100 of the 'books' that can be 'owned'. And no way to see a book that isn't one of those 100. Each book is unique, with personal dedications, and an accumulation of owners (not to mention a decreasing number of words) as it is passed on. So it is both a book and a cumulative expression of the erosion of the self and of being rewritten and misunderstood. That is echoed in the narrative: the story is fluid, the transition confusing, the purpose unclear. The book gradually falls apart in more ways than one. It is also kinda geeky.
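The mechanics Uglow describes – 100 unique copies, each accumulating owners and shedding words as it is passed on – can be sketched as a tiny data model. The class names and the decay rule below are my own guesses for illustration, not how Google actually built it:

```python
import random

class Copy:
    """One of the 100 'ownable' copies of the book."""

    def __init__(self, number, words):
        self.number = number
        self.words = list(words)   # this copy's remaining text
        self.owners = []           # accumulates as the copy is passed on

    def pass_to(self, owner, words_lost=3, seed=None):
        """Record a new owner and erode the text a little."""
        self.owners.append(owner)
        rng = random.Random(seed)
        for _ in range(min(words_lost, len(self.words))):
            self.words.pop(rng.randrange(len(self.words)))

text = "it is both a book and a cumulative expression of erosion".split()
edition = [Copy(n, text) for n in range(100)]  # 100 unique copies
edition[0].pass_to("first owner", seed=1)
print(len(edition[0].words), edition[0].owners)
```

The interesting design point is that ownership and erosion are coupled: the act of transfer is what degrades the text, so the provenance chain and the falling-apart of the book are the same record.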
A group of academics at Newcastle, collectivised under the moniker “the Analogue University” offer an Alex Galloway-like critique of “The Data University” over on the Antipode blog. An interesting read…
In this short intervention, we want to explore the possibilities for a third wave of critique related to the changing nature of academia. More specifically, we argue that we are now witnessing the emergence of the “Data University” where the initial emphasis on the primacy of data collection for auditing and measuring academic work has shifted to data coding itself as the new exchange value at work and productive of new subjectivities and freedoms. This third wave critique requires drawing a schematic line that now takes us beyond the intensification of neo-liberalisation, the internalisation of market values and associated affective structures of feeling to understanding our new digital and big data world. Influenced by Deleuze’s (1992) work on new societies of control, we argue that the genesis of the “Data University” lies in our active desire for data and its potential to mediate human relations and modulate our freedoms. This is absolutely central to our schematic for a third wave of critique: compared to older disciplinary societies like the school or prison institution (see below), today individuals both desire and are controlled through the active generation of proliferating data streams.
A great piece by Ruben Van de Ven stemming from his artwork of the same name, published on the Institute of Network Cultures site. Van de Ven, in a similar vein to Will Davies, deconstructs the logic of 'affective' computing, sentiment analysis and their application to what has been termed the 'attention economy'. The article does a really good job of demonstrating how the knowledge claims, and the epistemologies (perhaps ontologies too), that are at work behind these technologies are (of course) deeply political in their application. Very much worth reading! (snippet below).
I particularly like these bits copied below, but please read the whole post.
…I imagine what it may be like to arrive on the scene of a driverless car crash, and the kinds of maps I’d draw to understand what happened. Scenario planning is one way in which ‘unthinkable futures’ may be planned for.
The 'scenario' is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a "literature of future war" "located somewhere between a story outline and ever more sophisticated role-playing war games", "a staple of the new futurism". Since then scenario-planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War ("A Feminist World, 2091").
The applications of the Trolley Problem to driverless car crashes are a sort of scenario planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from a pitting of consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios where a single choice must be made.
What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we're seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modelling for over twenty years); from ethics that is about values, or reasoning, to ethics as based on datasets of correct responses, and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is the shift from ethics as a framework of "values for living well and dying well", as Grégoire Chamayou puts it, to a framework for "killing well", or 'necroethics'.
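The shift to "ethics as based on datasets of correct responses" is worth making concrete, because it is starker in code than in prose. In the caricature below (scenarios and labels are entirely invented for illustration), there are no values and no deliberation: the "ethical" choice is whatever the labelled data says, and anything unseen falls through to a default:

```python
# A toy of "ethics as a dataset of correct responses": the system does
# not reason from principles; it looks up the labelled outcome for a
# scenario. All scenarios and labels here are invented for illustration.
training_data = {
    ("swerve_option", "one_pedestrian"): "brake",
    ("straight_option", "five_pedestrians"): "swerve",
}

def decide(scenario):
    # No deliberation: return the dataset's label, or a default for
    # scenarios the dataset never anticipated.
    return training_data.get(scenario, "brake")

print(decide(("straight_option", "five_pedestrians")))  # swerve
print(decide(("novel", "situation")))                   # brake
```

The engineering question – what the default is, who labels the data, which scenarios are even representable as keys – is exactly where the politics sits, which is the point the piece is making.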
Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.
Coincidentally, in the latest Machine Ethics podcast (which I participated in a while ago), Joanna Bryson discusses these issues about the bases for deriving ethics in relation to AI, which is quite interesting.