How and why is children’s digital data being harvested?

Nice post by Huw Davies, which is worth a quick read (it’s fairly short)…

We need to ask what data capture and management would look like if guided by a children’s framework such as the one developed here by Sonia Livingstone and endorsed by the Children’s Commissioner here. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.

“algorithmic governance” – recent ‘algorithm’ debates in geography-land

Over on Antipode’s site there’s a blog post about an intervention symposium on “algorithmic governance” brought together by Jeremy Crampton and Andrea Miller, on the back of sessions at the AAG in 2016. It’s good that this is available open access and, I hope, helpful that it perhaps puts to bed some of the definitional wrangling that has been in fashion. Obviously, a lot draws on the work of geographer Louise Amoore and also of political theorist Antoinette Rouvroy, which is great.

Reading through the overview and skimming the individual papers provokes me to comment that I remain puzzled, though, by the wider creeping use of an unqualified “non-human” to talk about software and the sociotechnical systems it runs/is run on… this seems to play down precisely the political issues raised in this particular symposium – the kinds of algorithms concerned in this debate are written and maintained by people; they’re not somehow separate or at a distance… It’s also interesting to note that a sizeable chunk of the debates concerns ‘data’, yet the symposium doesn’t have “data” in the title – but maybe ‘data–’ is passé… 🙂

I’ve copied below the intro to the post, but please check out the whole thing over on Antipode’s site.

Intervention Symposium: “Algorithmic Governance”; organised by Jeremy Crampton and Andrea Miller

The following essays first came together at the 2016 AAG Annual Meeting in San Francisco. Jeremy Crampton (Professor of Geography at the University of Kentucky) and Andrea Miller (PhD candidate at University of California, Davis) assembled five panellists to discuss what they call algorithmic governance – “the manifold ways that algorithms and code/space enable practices of governance that ascribes risk, suspicion and positive value in geographic contexts.”

Among other things, panellists explored how we can best pay attention to the spaces of governance where algorithms operate, and are contested; the spatial dimensions of the data-driven subject; how modes of algorithmic modulation and control impact understandings of categories such as race and gender; the extent to which algorithms are deterministic, and the spaces of contestation or counter-algorithms; how algorithmic governance inflects and augments practices of policing and militarization; the most productive theoretical tools available for studying algorithmic data; visualizations such as maps being implicated by or for algorithms; and the genealogy of algorithms and other histories of computation.

Three of the panellists plus Andrea and Jeremy present versions of these discussions below, following an introduction to the Intervention Symposium from its guest editors (who Andy and Katherine at Antipode would like to thank for all their work!).

Read the whole post and see the contributions to the symposium on the Antipode site.

Reblog> Workshop: Reshaping Cities through Data and Experiments

This looks interesting (via Programmable City):

Workshop: Reshaping Cities through Data and Experiments

When: 30th May 2017 – 9.30am to 3.30pm
Where: Maynooth University, Iontas Building, Seminar Room 2.31

The “Reshaping Cities through Data and Experiments” workshop is part of the Ulysses research exchange programme jointly funded by Irish Research Council and the Ambassade de France. It is organized in collaboration with researchers from the Centre de Sociologie de l’Innovation (i3-CSI) at the École des Mines in Paris – David Pontille, Félix Talvard, Clément Marquet and Brice Laurent – and researchers from the National Institute for Regional and Spatial Analysis (NIRSA) in Maynooth University, Ireland – Claudio Coletta, Liam Heaphy and Sung-Yueh Perng.

The aim is to initiate a transdisciplinary discussion on the theoretical, methodological and empirical issues related to experimental and data-driven approaches to urban development and living. This conversation is vital in a time when cities are increasingly turning into public-private testbeds and living labs, where urban development projects merge with the design of cyber-infrastructures to test new services and new forms of engagement for urban innovation and economic development. These new forms of interaction between algorithms, planning practices and governance processes raise crucial questions for researchers on how everyday life, civic engagement and urban change are shaped in contemporary cities.

Read the full blogpost on the Programmable City site.

An ancient twin? Facial pattern matching with ancient statues

The Musée de la Civilisation in Quebec has an exhibition about ancient ‘doubles’ or ‘twins’, as part of which you can submit your photo and a program will match your face with images of statues in the collection.

It’s been in the press and, of course, is ‘just a bit of fun’, but it’s also sort of interesting to submit images and try to work out how the pattern matching is working – it’s not all that obvious! There’s probably something smart to say about ‘algorithms’ here, but I’ve not had enough sleep… check it out for yourself: Mon Sosie À 2000 Ans.
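The museum doesn’t say how its matching actually works, but a common way to build this kind of ‘lookalike’ feature is to map each face to a numerical embedding and return the statue whose embedding is nearest, e.g. by cosine similarity. Here’s a toy sketch of that idea – the statue names, vectors, and functions are all made up for illustration, not the museum’s actual method:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(visitor, statues):
    # Return the (name, embedding) pair closest to the visitor's embedding.
    return max(statues.items(), key=lambda item: cosine_similarity(visitor, item[1]))

# Made-up 3-dimensional "embeddings" purely for illustration; a real system
# would use vectors of hundreds of dimensions produced by a trained face model.
statues = {
    "Battataï": [0.9, 0.1, 0.3],
    "some other statue": [0.1, 0.8, 0.5],
}
name, _ = best_match([0.85, 0.2, 0.25], statues)
print(name)  # the nearest statue by cosine similarity
```

Part of why the results feel unpredictable is that “nearest in embedding space” need not correspond to what a human would call a resemblance.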

Here’s me and Battataï:

Songs “written by AI” from SonyCSL

Songs written by Sony CSL’s “AI”…

From the Sony CSL “flow machines” website:

Flow Machines is a research project funded by the European Research Council (ERC) and coordinated by François Pachet (Sony CSL Paris – UPMC).

The goal of Flow Machines is to research and develop Artificial Intelligence systems able to generate music autonomously or in collaboration with human artists.
We do so by turning music style into a computational object. Musical style can come from individual composers, for example Bach or The Beatles, or a set of different artists, or, of course, the style of the musician who is using the system.

Their “Deep Bach” thing was doing the rounds at the end of last year, so I presume there will be more to come.

A Universe Explodes. A Blockchain book/novel

Thanks to Max Dovey for the tip on this…

This seems interesting as a sort of provocation about what Blockchain says/asks about ownership, perhaps, although I’m not overly convinced by the gimmick of changing words such that the readers unravel, or “explode”, the book… I wonder whether The Raw Shark Texts or These Pages Fall Like Ash might be a deeper, or maybe I mean more nuanced, take on such things… however, I haven’t explored this enough yet and it’s good to see Google doing something like this (I think?!)

Here’s a snip from googler tea uglow’s medium post about this…

It’s a book. On your phone. Well, on the internet. Anyone can read it. It’s 20 pages long. Each page has 128 words, and there are 100 of the ‘books’ that can be ‘owned’. And no way to see a book that isn’t one of those 100. Each book is unique, with personal dedications, and an accumulation of owners, (not to mention a decreasing number of words) as it is passed on. So it is both a book and a cumulative expression of the erosion of the self and of being rewritten and misunderstood. That is echoed in the narrative: the story is fluid, the transition confusing, the purpose unclear. The book gradually falls apart in more ways than one. It is also kinda geeky.
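Purely as a sketch of the mechanic described above – a fixed edition of books that accumulate owners and shed words as they are passed on – something like the following; the one-word-per-page erosion rule here is my guess for illustration, not the project’s actual rule:

```python
def transfer(book, new_owner):
    """Pass the book on: record the new owner and erode each page.

    The erosion rule (drop the last word of every page) is a guess for
    illustration; the actual project's rule may well differ.
    """
    return {
        "owners": book["owners"] + [new_owner],
        "pages": [page[:-1] for page in book["pages"]],
    }

# A fresh copy: 20 pages of 128 (placeholder) words, one original owner.
book = {"owners": ["first owner"], "pages": [["word"] * 128 for _ in range(20)]}
book = transfer(book, "second owner")
# Each page now holds 127 words, and the ownership chain records both readers.
```

The interesting property, which a blockchain supplies in the real project, is that the chain of owners and the word-loss are irreversible and publicly verifiable rather than just stored in one mutable record.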

“Control, Resistance, and the ‘Data University’: Towards a Third Wave Critique”

A group of academics at Newcastle, collectivised under the moniker “the Analogue University” offer an Alex Galloway-like critique of “The Data University” over on the Antipode blog. An interesting read…

In this short intervention, we want to explore the possibilities for a third wave of critique related to the changing nature of academia. More specifically, we argue that we are now witnessing the emergence of the “Data University” where the initial emphasis on the primacy of data collection for auditing and measuring academic work has shifted to data coding itself as the new exchange value at work and productive of new subjectivities and freedoms. This third wave critique requires drawing a schematic line that now takes us beyond the intensification of neo-liberalisation, the internalisation of market values and associated affective structures of feeling to understanding our new digital and big data world. Influenced by Deleuze’s (1992) work on new societies of control, we argue that the genesis of the “Data University” lies in our active desire for data and its potential to mediate human relations and modulate our freedoms. This is absolutely central to our schematic for a third wave of critique: compared to older disciplinary societies like the school or prison institution (see below), today individuals both desire and are controlled through the active generation of proliferating data streams.

Read the full article.

Choose how you feel, you have seven options

A great piece by Ruben Van de Ven stemming from his artwork of the same name, published on the Institute of Network Cultures site. Van de Ven, in a similar vein to Will Davies, deconstructs the logic of ‘affective’ computing, sentiment analysis and their application to what has been termed the ‘attention economy’. The article does a really good job of demonstrating how the knowledge claims, and the epistemologies (perhaps ontologies too), that are at work behind these technologies are (of course) deeply political in their application. Very much worth reading! (snippet below).

 ‘Weeks ago I saw an older woman crying outside my office building as I was walking in. She was alone, and I worried she needed help. I was afraid to ask, but I set my fears aside and walked up to her. She appreciated my gesture, but said she would be fine and her husband would be along soon. With emotion enabled (Augmented Reality), I could have had far more details to help me through the situation. It would have helped me know if I should approach her. It would have also let me know how she truly felt about my talking to her.’

FOREST HANDFORD

This is how Forest Handford, a software developer, outlines his ideal future for a technology that has emerged over the past years. It is known as emotion analysis software, emotion detection, emotion recognition or emotion analytics. One day, Handford hopes, the software will aid in understanding the other’s genuine, sincere, yet unspoken feelings (‘how she truly felt’). Technology will guide us through a landscape of emotions, like satellite navigation technologies guide us to destinations unknown to us: we blindly trust the route that is plotted out for us. But in a world of digitized emotions, what does it mean to feel 63% surprised and 54% joyful?

Please take the time to read the whole article.

Reblog> Accident tourist – driverless cars and ethics

An interesting and well-written piece over on Cyborgology by Maya from Tactical Tech Collective (amongst many other things!)

I particularly like these bits copied below, but please read the whole post.

Accident Tourist: Driverless car crashes, ethics, machine learning

…I imagine what it may be like to arrive on the scene of a driverless car crash, and the kinds of maps I’d draw to understand what happened. Scenario planning is one way in which ‘unthinkable futures’ may be planned for.

The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war” “located somewhere between a story outline and ever more sophisticated role-playing war games”, “a staple of the new futurism”. Since then scenario-planning has been adopted by a range of organisations, and features in the modelling of risk and to identify errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (“A Feminist World, 2091”).

The applications of the Trolley Problem to driverless car crashes are a sort of scenario planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes derived from a pitting of consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios where a single choice must be made.

[…]

What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though, the automotive industry has been using computer simulated crash modeling for over twenty years); from ethics that is about values, or reasoning, to ethics as based on datasets of correct responses, and, crucially, of ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is the shift from ethics as a framework of “values for living well and dying well”, as Gregoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.

Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.

Coincidentally, in the latest Machine Ethics podcast (which I participated in a while ago), Joanna Bryson discusses these issues about the bases for deriving ethics in relation to AI, which is quite interesting.