A Universe Explodes. A Blockchain book/novel

Thanks to Max Dovey for the tip on this…

This seems interesting as a sort of provocation about what blockchain says/asks about ownership, perhaps, although I’m not overly convinced by the gimmick of changing words such that readers unravel, or “explode”, the book… I wonder whether The Raw Shark Texts or These Pages Fall Like Ash might be a deeper, or maybe just more nuanced, take on such things… however, I haven’t explored this enough yet and it’s good to see Google doing something like this (I think?!)

Here’s a snippet from Googler Tea Uglow’s Medium post about this…

It’s a book. On your phone. Well, on the internet. Anyone can read it. It’s 20 pages long. Each page has 128 words, and there are 100 of the ‘books’ that can be ‘owned’. And no way to see a book that isn’t one of those 100. Each book is unique, with personal dedications, and an accumulation of owners (not to mention a decreasing number of words) as it is passed on. So it is both a book and a cumulative expression of the erosion of the self and of being rewritten and misunderstood. That is echoed in the narrative: the story is fluid, the transition confusing, the purpose unclear. The book gradually falls apart in more ways than one. It is also kinda geeky.
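Purely as a way of thinking the mechanic through, here’s a toy sketch (in Python) of the decreasing-words-per-transfer idea described above. The 20 pages / 128 words / 100 copies figures come from the quote; everything else – the class, the one-random-word-per-transfer rule – is my own invention, not the project’s actual rules, which I haven’t dug into:

```python
import random

PAGES, WORDS_PER_PAGE, COPIES = 20, 128, 100  # figures from Uglow's description

class Book:
    """Toy model of one of the 100 ownable copies: each transfer
    records the new owner and removes a word, so the text erodes."""

    def __init__(self, copy_id, text):
        self.copy_id = copy_id
        self.words = text.split()  # ideally 20 x 128 = 2560 words
        self.owners = []           # the accumulating chain of owners/dedications

    def transfer(self, new_owner):
        """Hand the book on: log the owner and 'explode' one random word."""
        self.owners.append(new_owner)
        if self.words:
            self.words.pop(random.randrange(len(self.words)))

    @property
    def exploded(self):
        return not self.words

# e.g. a very short 'book' passed between three readers
b = Book(1, "a universe explodes and the words slowly fall away")
for reader in ["ann", "ben", "cas"]:
    b.transfer(reader)
print(b.owners, "-", len(b.words), "words left")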

Choose how you feel, you have seven options

A great piece by Ruben van de Ven, stemming from his artwork of the same name, published on the Institute of Network Cultures site. Van de Ven, in a similar vein to Will Davies, deconstructs the logic of ‘affective’ computing, sentiment analysis and their application to what has been termed the ‘attention economy’. The article does a really good job of demonstrating how the knowledge claims, and the epistemologies (perhaps ontologies too), at work behind these technologies are (of course) deeply political in their application. Very much worth reading! (snippet below).

 ‘Weeks ago I saw an older woman crying outside my office building as I was walking in. She was alone, and I worried she needed help. I was afraid to ask, but I set my fears aside and walked up to her. She appreciated my gesture, but said she would be fine and her husband would be along soon. With emotion enabled (Augmented Reality), I could have had far more details to help me through the situation. It would have helped me know if I should approach her. It would have also let me know how she truly felt about my talking to her.’

FOREST HANDFORD

This is how Forest Handford, a software developer, outlines his ideal future for a technology that has emerged over the past years. It is known as emotion analysis software, emotion detection, emotion recognition or emotion analytics. One day, Handford hopes, the software will aid in understanding the other’s genuine, sincere, yet unspoken feelings (‘how she truly felt’). Technology will guide us through a landscape of emotions, like satellite navigation technologies guide us to destinations unknown to us: we blindly trust the route that is plotted out for us. But in a world of digitized emotions, what does it mean to feel 63% surprised and 54% joyful?

Please take the time to read the whole article.

Reblog> Workshop on Security and the Political Turn in the Philosophy of Technologies

An interesting event blogged by Peter-Paul Verbeek:

Workshop ‘Security and the Political Turn in the Philosophy of Technologies’, University of Twente | DesignLab, March 10 2017. How to understand the political significance of things? And how to deal with the politics of technology in a responsible way? Ever since Langdon Winner claimed in the early 1980s that “artifacts have politics”, these questions have been puzzling philosophers and ethicists of technology. Technologies are not just instruments for humans to do politics but actively shape politics themselves. In this workshop we will explore various dimensions of this political role of technologies, especially with regards to security, citizenship in a technological world, and the role of social media and ‘fake news’ in contemporary democracy.

Speakers include:

  • Babette Babich (Fordham)
  • Robin James (UNCC)
  • Laura Fichtner (TUD)
  • Wolter Pieters (TUD)
  • Melis Bas (UT)
  • Jonne Hoek (UT)
  • Philip Brey (UT)
  • Nolen Gertz (UT)
  • Michael Nagenborg (UT)
  • Peter-Paul Verbeek (UT)

The workshop is sponsored by the 4TU.Ethics working group on “Risk, Safety, and Security.”

“Do you hear voices? You do. So you are possessed” – @mikedotphillips talk @exetergeography

I’m really pleased to share that Prof. Mike Phillips (i-DAT, Plymouth) will be speaking next week as part of the Exeter Geography seminar series. Mike is a founder of the Institute of Digital Art and Technology and of the undergraduate programme I studied, MediaLab Arts, which is now called Digital Media Design.

Details: Thursday 16th March, 12:30: Amory 417. All welcome!

Reblog> Book Launch: Playful Mapping in the Digital Age

Via Institute of Network Cultures.

Generally wary of anything with “digital age” in the title, but good to see some geographers featuring in ‘digital cultures’ type networks…

Book Launch: Playful Mapping in the Digital Age

MONDAY 13TH OF MARCH, 17:00 – 18:15 @SPUI25, AMSTERDAM.

With keynote lectures by Michiel de Lange, Emma Fraser and Clancy Wilmott.

From Mah-Jong to the introduction of Prussian war-games, through to the emergence of location-based play: maps and play share a long and diverse history. In this programme, we will launch the book Playful Mapping in the Digital Age, which shows how playing and mapping can be liberating, dangerous, subversive and performative.

>> Sign up here.

Playful Mapping in the Digital Age shows how mapping and playing unfold in the digital age and in which ways the relations between these apparently separate tropes are increasingly woven together. Fluid networks of interaction have encouraged a proliferation of hybrid forms of mapping and playing. A rich plethora of contemporary case-studies, ranging from fieldwork, golf, activism and automotive navigation to pervasive and desktop-based games, emphasizes this trend. Examining these cases shows how mapping and playing can form productive synergies, but also encourages new ways of being, knowing and shaping our everyday lives. This afternoon, the Members of the Playful Mapping Collective explore how play can be more than just an object or practice, and instead focus on its potential as a method for understanding maps and spatiality.

Echoborg

Former colleagues of mine at UWE are developing an interesting project, which you may have seen/heard about through the BBC’s Click programme, called Echoborg.

An echoborg is a hybrid agent composed of the body of a real person and the “mind” (or, rather, the words) of a conversational agent; the words the echoborg speaks are determined by the conversational agent, transmitted to the person via a covert audio-relay apparatus, and articulated by the person through speech shadowing[1].

Recently, the project team demoed Echoborg as part of an AHRC-funded network on Automation Anxiety and have written this up on the project website. Here’s a snippet – it sounds really compelling (I’ve not seen it in action):

Four people were interviewed by the AI which increasingly displayed an interest in eliciting help to reprogram itself. Proceedings were visible on a projector screen and the ‘audience’ of applicants gradually began to discuss the situation of the Echoborg and how to change it. At a certain point their reflections passed a threshold and the group fired into collective action, experimenting with various methods to bring the situation to a head in some way. The lively inventiveness of the group and the individual interviewees went a long way to confirming the interactive potential of this format of the work. It also gave Rik and Phil much to work with in considering the further development of the AI/Chatbot, the restricted delivery of narrative by the human Echoborg and the staging. This event also trialled a secondary, higher level, Echoborg character as part of the slow process of unfolding the potential for this Echoborg recruitment event to be a disruptive and thought and emotion provoking experience for all players.

Notes

  1. Corti, Kevin and Gillespie, Alex (2015). Offscreen and in the chair next to you: conversational agents speaking through actual human bodies. Lecture Notes in Computer Science, 9238, pp. 405-417.

Reblog> Social Justice in an Age of Datafication: Launch of the Data Justice Lab

Via The Data Justice Lab.

Social Justice in an Age of Datafication: Launch of the Data Justice Lab

The Data Justice Lab will be officially launched on Friday, 17 March 2017. Join us for the launch event at Cardiff University’s School of Journalism, Media and Cultural Studies (JOMEC) at 4pm. Three international speakers will discuss the challenges of data justice.

The event is free but requires pre-booking at https://www.eventbrite.com/e/social-justice-in-an-age-of-datafication-launching-the-data-justice-lab-tickets-31849002223

Data Justice Lab — Launch Event — Friday 17 March 4pm — Cardiff University

Our financial transactions, communications, movements, relationships, and interactions with government and corporations all increasingly generate data that are used to profile and sort groups and individuals. These processes can affect both individuals as well as entire communities that may be denied services and access to opportunities, or wrongfully targeted and exploited. In short, they impact on our ability to participate in society. The emergence of this data paradigm therefore introduces a particular set of power dynamics requiring investigation and critique.

The Data Justice Lab is a new space for research and collaboration at Cardiff University that has been established to examine the relationship between datafication and social justice. With this launch event, we ask: What does social justice mean in an age of datafication? How are data-driven processes impacting on certain communities? In what way does big data change our understanding of governance and politics? And what can we do about it?

We invite you to come and participate in this important discussion. We will be joined by the following keynote speakers:

Virginia Eubanks (New America), Malavika Jayaram (Digital Asia Hub), and Steven Renderos (Center for Media Justice).

Virginia Eubanks is the author of Digital Dead End: Fighting for Social Justice in the Information Age (MIT Press, 2011) and co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith (SUNY Press, 2014). She is also the cofounder of Our Knowledge, Our Power (OKOP), a grassroots economic justice and welfare rights organization. Professor Eubanks is currently working on her third book, Digital Poorhouse, for St. Martin’s Press. In it, she examines how new data-driven systems regulate and discipline the poor in the United States. She is a Fellow at New America, a Washington, D.C. think tank and the recipient of a three-year research grant from the Digital Trust Foundation (with Seeta Peña Gangadharan and Joseph Turow) to explore the meaning of digital privacy and data justice in marginalized communities.

Malavika Jayaram is the Executive Director of the Digital Asia Hub in Hong Kong. Previously she was a Fellow at the Berkman Klein Center for Internet & Society at Harvard University, where she focused on privacy, identity, biometrics and data ethics. She worked at law firms in India and the UK, and she was voted one of India’s leading lawyers. She is Adjunct Faculty at Northwestern University and a Fellow with the Centre for Internet & Society, India, and she is on the Advisory Board of the Electronic Privacy Information Center (EPIC).

Steven Renderos is Organizing Director at the Center for Media Justice. With over 10 years of organizing experience, Steven has been involved in campaigns to lower the cost of prison phone calls, preserve the Open Internet, and expand community-owned radio stations. Steven previously served as Project Coordinator of the Minnesotano Media Empowerment Project, an initiative focused on improving the quality and quantity of media coverage and representation of Latinos in Minnesota. He currently serves on the boards of Organizing Apprenticeship Project and La Asamblea de Derechos Civiles. Steven (aka DJ Ren) also hosts a show called Radio Pocho at a community radio station and spins at venues in NYC.

The event will be followed by a reception.

The internet is mostly bots(?)

When I am king, you will be first against the wall…

In an article for The Atlantic, Adrienne LaFrance observes that a report by the security firm Imperva suggests that 51.8% of traffic online is bot traffic (by which they mean 51.8% of a sample of traffic [“16.7 billion bot and human visits collected from August 9, 2016 to November 6, 2016”] passing through their global content delivery network, “Incapsula”):

Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online. Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.

LaFrance goes on to cite Imperva’s marketing director (who wants to sell you ‘security’ – he’s in the business of selling data centre services), who observes:

“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”

How do we judge this report? I find it difficult to know how representative this company’s data, or their representation of it, really is – although they are the purveyors of a ‘global content delivery network’. The numbers seem believable, given how long we’ve been hearing that the majority of traffic is ‘not human’ (e.g. a 2013 article in The Atlantic making a similar point and a 2012 ZDNet article saying the same thing: most web traffic is ‘not human’ and mostly malicious).

The ‘not human’ thing needs to be questioned a bit — yes, it’s not literally the result of a physical action but, then, how much of the activity on the electric grid can be said to be ‘not human’ too? I’d hazard that the majority of that so-called ‘not human’ traffic is under some kind of regular oversight and monitoring – it is, more or less, the expression of deliberative (human) agency. Indeed, to reduce the ‘human’ to what our simian digits can make happen seems ridiculous to me… We need a more expansive understanding of technical (as in technics) agency. We need more nuanced ways to come to terms with the scale and complexity of the ways we, as a species, produce and perform our experiences of everyday life – of what counts as work and the things we take for granted.

Microsoft Cognitive Services

Microsoft Cognitive Services (sounds like something from a Philip K. Dick novel) has opened up APIs, which you can call on (req. subscription), to outsource forms of machine learning. So, if you want to identify faces in pictures or videos you can call on the “Face API”, for example. Obviously, this is all old news… but it’s sort of interesting to think about how this foregrounds the homogenisation of process – the apparent ‘power’ of these particular programmes (accessed via their APIs) may be their widespread use.

This might be of further interest when we consider things like the “Emotion API”, through which (in line with many other forms of programmatic measure of the display or representation of ‘emotion’ or ‘sentiment’) the programme scores a facial expression along several measures, listed in the free example as: “anger”, “contempt”, “disgust”, “fear”, “happiness”, “neutral”, “sadness”, “surprise”. For each image you’ll get a table of scores for each recognised face. Have a play – it’s beguiling, but of course it then prompts the sorts of questions lots of people have been asking about how ‘affect’ and emotions can get codified (e.g. Massumi) and the politics and ethics of the ‘algorithms’ and such like that do these things (e.g. Beer).
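To give a sense of how mundane the ‘easy-to-use’ API structure makes all this, here’s a minimal sketch of the sort of call involved (in Python; the endpoint URL, key and response shape are placeholders based on the documentation as I remember it – check Microsoft’s current docs rather than trusting this):

```python
import requests

# Placeholder specifics: this is the general shape of a Cognitive Services
# call, not a guaranteed live endpoint - consult the current documentation.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
SUBSCRIPTION_KEY = "your-subscription-key"  # requires a subscription

def score_emotions(image_url):
    """Post an image URL and return the API's per-face emotion scores."""
    response = requests.post(
        ENDPOINT,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    # Expected shape (per the docs of the time): one entry per detected face,
    # e.g. [{"faceRectangle": {...}, "scores": {"anger": 0.01, ...,
    #        "happiness": 0.93, "surprise": 0.02}}]
    return response.json()

for face in score_emotions("https://example.com/some-photo.jpg"):
    scores = face["scores"]
    top = max(scores, key=scores.get)
    print(face["faceRectangle"], "->", top, round(scores[top], 2))
```

The point being: a dozen lines and an HTTP key, and the codification of emotion arrives as just another JSON payload.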

I am probably late to all of this and seeing significance here because it’s relatively novel to me (not the tech itself but the ‘easy-to-use’ API structure), nevertheless it seems interesting, to me at least, that these forms of machine learning are being produced as mundane through being made abundant, as apparently straightforward tools. Maybe what I’m picking up on is that these APIs, the programmes they grant access to, are relatively transparent, whereas much of what various ‘algorithm studies’ folk look at is opaque.  Microsoft’s Cognitive Services make mundane what, to some, are very political technologies.