From the MIT Tech Review:
Well that’s alright then! 😉
Image credit: Mike Duggan.
At the RGS-IBG Annual International Conference 2017, co-originator of the Museum of Contemporary Commodities (MoCC) Paula Crutchlow and I staged a conversation with Mikayla the MoCC guide, a hacked ‘My Cayla Doll’. This was part of two sessions that capped off the presence of MoCC at the RGS-IBG and was performed alongside a range of other provocations on the theme(s) of ‘data-place-trade-value’. The doll was only mildly disobedient and it was fun to be able to show the subversion of an object of commercial surveillance in a playful way. Below are the visuals displayed during the conversation, with additional sound…
For more, please do go and read Paula’s excellent blogpost about Mikayla on the MoCC website.
This is good. Via dmf.
Trevor Paglen on ‘autonomous hypernormal mega-meta-realism’ (probably a nod to Curtis there). An entertaining brief talk about ‘AI’ visual recognition systems and their aesthetics.
(I don’t normally hold with laughing at your own gags but Paglen says some interesting things here – expanded upon in this piece (‘Invisible Images: Your pictures are looking at you’) and this artwork – Sight Machines [see below]).
Relatively amusing but also disquieting…
The original CIA report:
It’s interesting to compare competing interpretations of the same ‘vision’ for our near-future everyday media experience. They more or less circle around a series of themes that have been a staple of science fiction for some time: media are embedded in the everyday environment and they respond to us, to varying degrees personally.
On the one hand, some tech enthusiasts/developers present ideas such as “responsive media”, a vision put forward by a former head of ubiquitous computing at Xerox PARC, Bo Begole. On the other hand, sceptics have, for quite some time, presented us with dystopian and/or ‘critical’ reflections on the kinds of ethical and political(-economic) ills such ideas might mete out upon us (more often than not from a broadly Marxian perspective), recently expressed in Adam Greenfield’s op-ed for the Graun (publicising his new book “Radical Technologies”).
It’s not like there aren’t plenty of start-ups, and bigger companies (Begole now works for Huawei), trying more or less to make the things that science fiction books and films (often derived in some way from Philip K. Dick’s oeuvre) present as insidious and nightmarish. Here I can unfairly pick on two quick examples: the Channel 4 “world’s first personalised advert” (see the video above) and OfferMoments:
While it may be true that many new inventors are subconsciously inspired by the science fiction of their childhoods, this form of inspiration is hardly seen in the world of outdoor media. Not so for OfferMoments – a company offering facial recognition-powered, programmatically-sold billboard tech directly inspired by the 2002 thriller, Minority Report.
I’ve discussed this in probably too-prosaic terms as a ‘politics of anticipation’, but this, by Audrey Watters (originally about EdTech), seems pretty incisive to me:
if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.
I have come to think this has produced a kind of orientation towards particular ideas and ideals around automation, which I’ve variously been discussing (in the brief moments in which I manage to do research) as an ‘algorithmic’ and more recently an ‘automative‘ imagination (in the manner in which we, geographers, talk about a ‘geographical imagination’).
We need to ask what data capture and management would look like if guided by a children’s framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.
Here’s an exercise to do, as a non-specialist, for yourself or maybe as part of a classroom activity: discuss what Facebook (data brokers, credit checkers, etc.) might know about me/us/you, how accurate that data/information might be, and what it means for our lives.
One of the persistent themes of how we tell stories about the ‘information society’, ‘big data’, corporate surveillance and so on is the extent of the data held about each and every one of us. Lots of stories are told on the back of that and there are, of course, real life consequences to inaccuracies.
Nevertheless, an interesting way of starting the exercise above is to compare and contrast the following two articles:
The exploitation of personal information has become a multi-billion industry. Yet only the tip of the iceberg of today’s pervasive digital tracking is visible; much of it occurs in the background and remains opaque to most of us.
If you like percentages, nearly 50 percent of the data in the report about me was incorrect. Even the zip code listed does not match that of my permanent address in the U.S.; it shows instead the zip code of an apartment where I lived several years ago. Many data points were so out of date as to be useless for marketing–or nefarious–purposes: My occupation is listed as “student”; my net worth does not take into account my really rather impressive student loan debt. And the information that is accurate, including my age and aforementioned net worth (when adjusted for the student debt), is presented in wide ranges.
Of course, it matters little whether the data is correct: the inaccuracies have real-world consequences too, and the granularity of the accuracy only matters in certain circumstances. So, thinking about how and why the data captured about us matters, and what it might facilitate, allow, or prevent us (or those around us) from doing, seems like an interesting way to occupy thirty minutes or so…
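For anyone running the exercise as a classroom activity, the field-by-field accuracy check described in the second article could be tallied with a few lines of Python. Everything below is an invented example record with hypothetical field names, not a real broker schema:

```python
# Hypothetical classroom aid: tally how accurate a data broker's record
# about you turns out to be, field by field. The fields and values here
# are invented examples for discussion, not real broker data.

broker_record = {
    # field: (broker's claim, did you judge it accurate?)
    "occupation": ("student", False),          # years out of date
    "zip_code": ("02139", False),              # an old address
    "age_range": ("25-34", True),
    "net_worth_range": ("$50k-$100k", False),  # ignores student debt
    "homeowner": ("no", True),
}

accurate = sum(1 for _, ok in broker_record.values() if ok)
share = accurate / len(broker_record)
print(f"{accurate}/{len(broker_record)} fields accurate ({share:.0%})")
# → 2/5 fields accurate (40%)
```

A useful discussion point is that the tally itself is a judgment call: wide ranges (like the net-worth bands above) can be technically “accurate” while being too coarse to mean much.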
An interesting event blogged by Peter-Paul Verbeek: