‘Pax Technica’ Talking Politics, Naughton & Howard

Nest - artwork by Jakub Geltner

This episode of the ‘Talking Politics’ podcast is a conversation between LRB journalist John Naughton and the Oxford Internet Institute’s Professor Philip Howard. It ranges over a number of issues but largely circles around the political questions that may emerge from ‘Internets of Things’ (the plural is important in the argument), which are discussed in Howard’s book ‘Pax Technica’. Worth a listen if you have time…

One of the slightly throwaway bits of the conversation that interested me, which didn’t concern the tech, was when Howard comments on the kind of book Pax Technica is – a ‘popular’ rather than a ‘scholarly’ book – and how that had led to a sense of dismissal by some. It seems nuts (to me, anyway), when we’re all supposed to be engaging in ‘impact’, ‘knowledge exchange’ and so on, that opting to write a £17 paperback that opens out debate, instead of an £80+ ‘scholarly’ hardback, is frowned upon. I mean, I understand some of the reasons why, but still…

Reblog> New paper: A smart place to work? Big data systems, labour, control, and modern retail stores

Gilbreth motion studies light painting

From the Programmable City team, looks interesting:

New paper: A smart place to work? Big data systems, labour, control, and modern retail stores

The modern retail store is a complex coded assemblage and data-intensive environment, its operations and management mediated by a number of interlinked big data systems. This paper draws on an ethnography of a superstore in Ireland to examine how these systems modulate the functioning of the store and working practices of employees. It was found that retail work involves a continual movement between a governance regime of control reliant on big data systems which seek to regulate and harness formal labour and automation into enterprise planning, and a disciplinary regime that deals with the symbolic, interactive labour that workers perform and acts as a reserve mode of governmentality if control fails. This continual movement is caused by new systems of control being open to vertical and horizontal fissures. While retail functions as a coded assemblage of control, systems are too brittle to sustain the code/space and governmentality desired.

Access the PDF here

Roadside billboards display targeted ads in Russia

racist facial recognition

From the MIT Tech Review:

Moscow Billboard Targets Ads Based on the Car You’re Driving

Targeted advertising is familiar to anyone browsing the Internet. A startup called Synaps Labs has brought it to the physical world by combining high-speed cameras set up a distance ahead of the billboard (about 180 meters) to capture images of cars. Its machine-learning system can recognize in those images the make and model of the cars an advertiser wants to target. A bidding system then selects the appropriate advertising to put on the billboard as that car passes.
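The loop the article describes – recognise a passing car’s make and model, then run a bidding selection among advertisers targeting that model – can be sketched roughly as follows. This is a minimal illustration only; the campaign names, bids, and data structures are all hypothetical, not Synaps’s actual system:

```python
# Hypothetical advertiser campaigns: which car models each one bids on,
# and how much it pays per impression. An empty target set means the
# campaign acts as an untargeted fallback.
CAMPAIGNS = {
    "Jaguar SUV ad": {"targets": {"BMW X5", "BMW X6", "Volvo XC60"}, "bid": 1.50},
    "Generic ad":    {"targets": set(),                              "bid": 0.10},
}

def select_ad(detected_model: str) -> str:
    """Pick the highest-bidding campaign that targets the detected model,
    falling back to campaigns with no targeting constraint."""
    eligible = [
        (campaign["bid"], name)
        for name, campaign in CAMPAIGNS.items()
        if not campaign["targets"] or detected_model in campaign["targets"]
    ]
    # max over (bid, name) tuples selects the highest bid.
    return max(eligible)[1]

print(select_ad("BMW X5"))        # the targeted Jaguar campaign wins
print(select_ad("Toyota Yaris"))  # falls back to the generic ad
```

The real system would feed the detected model from a machine-vision classifier rather than a string, and the auction would run against live advertiser bids, but the selection logic is essentially this shape.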

Marketing a car on a roadside billboard might seem a logical fit. But how broad could this kind of advertising be? There is a lot an advertiser can tell about you from the car you drive, says Synaps. Indeed, recent research from a group of university researchers and led by Stanford found that—using machine vision and deep learning—analyzing the make, model, and year of vehicles visible in Google Street View could accurately estimate income, race, and education level of a neighborhood’s residents, and even whether a city is likely to vote Democrat or Republican.

As the camera spots a BMW X5 in the third lane, and later a BMW X6 and a Volvo XC60 in the far left lane, the billboard changes to show Jaguar’s new SUV, an ad that’s targeted to those drivers.

Synaps’s business model is to sell its services to the owners of digital billboards. Digital billboard advertising rotates, and more targeted advertising can rotate more often, allowing operators to sell more ads. According to Synaps, a targeted ad shown 8,500 times in one month will reach the same number of targeted drivers (approximately 22,000) as a typical ad shown 55,000 times. The Jaguar campaign paid the billboard operator based on the number of impressions, as Web advertisers do. The traditional billboard-advertising model is priced instead on airtime, similar to TV ads.
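As a quick sanity check on those figures (taking the article’s numbers at face value, and assuming both counts refer to reaching the same ~22,000 targeted drivers):

```python
# Back-of-envelope comparison of targeted vs. untargeted showings,
# using the numbers quoted in the article.
targeted_drivers = 22_000

targeted_showings = 8_500     # targeted ad showings needed per month
untargeted_showings = 55_000  # typical (untargeted) showings for same reach

per_targeted = targeted_drivers / targeted_showings      # ~2.6 drivers/showing
per_untargeted = targeted_drivers / untargeted_showings  # 0.4 drivers/showing

# Targeting makes each showing roughly 6.5x more efficient at
# reaching the advertiser's intended audience.
print(round(per_targeted / per_untargeted, 1))
```

That efficiency gap is what lets billboard operators sell the freed-up rotation slots to other advertisers.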

In Russia, Synaps expects to be operating on 20 to 50 billboards this year. The company is also planning a test in the U.S. this summer, where there are roughly 7,000 digital billboards, a number growing at 15 percent a year, according to the company. (By contrast, there are 370,000 conventional billboards.) With a row of digital billboards along a road, they could roll the ads as the cars move along, making billboard advertising more like the storytelling style of television and the Internet, says Synaps’s cofounder Alex Pustov.

There are limits to what the company will use its cameras for. Synaps won’t sell data on individual drivers, though the company is interested in possibly using aggregate traffic patterns for services like predictive traffic analysis and the sociodemographic analysis of commuters versus residents in an area, traffic emissions tracking, or other uses.

Out of safety concerns, license plate data is encrypted, and the company says it will comply with local regulations limiting the time this kind of data can be stored, as well.

Well that’s alright then! 😉

Talking with Mikayla

Talking with Mikayla, the Museum of Contemporary Commodities Guide. Image credit: Mike Duggan.

At the RGS-IBG Annual International Conference 2017, co-originator of the Museum of Contemporary Commodities (MoCC) Paula Crutchlow and I staged a conversation with Mikayla the MoCC guide, a hacked ‘My Cayla Doll’. This was part of two sessions that capped off the presence of MoCC at the RGS-IBG and was performed alongside a range of other provocations on the theme(s) of ‘data-place-trade-value’. The doll was only mildly disobedient and it was fun to be able to show the subversion of an object of commercial surveillance in a playful way. Below are the visuals that were displayed during the conversation, with additional sound…

For more, please do go and read Paula’s excellent blogpost about Mikayla on the MoCC website.

“Invisible Images: Ethics of Autonomous Vision Systems” Trevor Paglen at “AI Now” (video)

racist facial recognition

Via Data & Society / AI Now.

Trevor Paglen on ‘autonomous hypernormal mega-meta-realism’ (probably a nod to Curtis there). An entertaining brief talk about ‘AI’ visual recognition systems and their aesthetics.

(I don’t normally hold with laughing at your own gags but Paglen says some interesting things here – expanded upon in this piece (‘Invisible Images: Your pictures are looking at you’) and this artwork – Sight Machines [see below]).

Responsive media

personal media

It’s interesting to compare competing interpretations of the same ‘vision’ for our near-future everyday media experience. They more or less circle around a series of themes that have been a staple of science fiction for some time: media are in the everyday environment and they respond to us, to varying degrees personally.

On the one hand, some tech enthusiasts/developers present ideas such as “responsive media“, a vision put forward by Bo Begole, a former head of ubiquitous computing at Xerox PARC. On the other hand, sceptics have, for quite some time, presented us with dystopian and/or ‘critical’ reflections on the kinds of ethical and political(-economic) ills such ideas might mete out upon us (more often than not from a broadly Marxian perspective), recently expressed in Adam Greenfield’s op-ed for the Graun (publicising his new book “Radical Technologies”).

It’s not like there aren’t plenty of start-ups, and bigger companies (Begole now works for Huawei), trying, more or less, to make the things that science fiction books and films (often derived in some way from Philip K. Dick’s oeuvre) present as insidious and nightmarish. Here I can unfairly pick on two quick examples: the Channel 4 “world’s first personalised advert” (see the video above) and OfferMoments:

While it may be true that many new inventors are subconsciously inspired by the science fiction of their childhoods, this form of inspiration is hardly seen in the world of outdoor media. Not so for OfferMoments – a company offering facial recognition-powered, programmatically-sold billboard tech directly inspired by the 2002 thriller, Minority Report.

I’ve discussed this in probably too-prosaic terms as a ‘politics of anticipation’, but this, by Audrey Watters (originally about EdTech), seems pretty incisive to me:

if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

I have come to think this has produced a kind of orientation towards particular ideas and ideals around automation, which I’ve variously been discussing (in the brief moments in which I manage to do research) as an ‘algorithmic’ and more recently an ‘automative’ imagination (in the manner in which we, geographers, talk about a ‘geographical imagination’).

How and why is children’s digital data being harvested?

Nice post by Huw Davies, which is worth a quick read (it’s fairly short)…

We need to ask what data capture and management would look like if guided by a children’s framework such as this one developed here by Sonia Livingstone and endorsed by the Children’s Commissioner here. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.

Through a data broker darkly…

Here’s an exercise to do, as a non-specialist, for yourself or maybe as part of a classroom activity: discuss what Facebook (data brokers, credit checkers, etc.) might know about me/us/you, how accurate that data/information might be, and what that means for our lives.

One of the persistent themes of how we tell stories about the ‘information society’, ‘big data’, corporate surveillance and so on is the extent of the data held about each and every one of us. Lots of stories are told on the back of that and there are, of course, real life consequences to inaccuracies.

Nevertheless, an interesting way of starting the exercise above is to compare and contrast the following two articles:

Corporate Surveillance in Everyday Life:

The exploitation of personal information has become a multi-billion industry. Yet only the tip of the iceberg of today’s pervasive digital tracking is visible; much of it occurs in the background and remains opaque to most of us.

I Bought a Report on Everything That’s Known About Me Online:

If you like percentages, nearly 50 percent of the data in the report about me was incorrect. Even the zip code listed does not match that of my permanent address in the U.S.; it shows instead the zip code of an apartment where I lived several years ago. Many data points were so out of date as to be useless for marketing–or nefarious–purposes: My occupation is listed as “student”; my net worth does not take into account my really rather impressive student loan debt. And the information that is accurate, including my age and aforementioned net worth (when adjusted for the student debt), is presented in wide ranges.

Of course, it does not matter whether the data is correct for it to have effects – even inaccuracies have real-world consequences, and the granularity of the accuracy only matters in certain circumstances. So, thinking about how and why the data captured about us matters – what it might facilitate, what it might allow or prevent us or those around us from doing – seems like an interesting activity to occupy thirty minutes or so…