Roadside billboards display targeted ads in Russia


From the MIT Tech Review:

Moscow Billboard Targets Ads Based on the Car You’re Driving

Targeted advertising is familiar to anyone browsing the Internet. A startup called Synaps Labs has brought it to the physical world by combining high-speed cameras, set up some distance ahead of the billboard (about 180 meters) to capture images of cars, with a machine-learning system that can recognize in those images the make and model of the cars an advertiser wants to target. A bidding system then selects the appropriate advertising to put on the billboard as that car passes.

Marketing a car on a roadside billboard might seem a logical fit. But how broad could this kind of advertising be? There is a lot an advertiser can tell about you from the car you drive, says Synaps. Indeed, recent research by a group of university researchers, led by a Stanford team, found that—using machine vision and deep learning—analyzing the make, model, and year of vehicles visible in Google Street View could accurately estimate the income, race, and education level of a neighborhood’s residents, and even whether a city is likely to vote Democrat or Republican.

As the camera spots a BMW X5 in the third lane, and later a BMW X6 and a Volvo XC60 in the far left lane, the billboard changes to show Jaguar’s new SUV, an ad that’s targeted to those drivers.

Synaps’s business model is to sell its services to the owners of digital billboards. Digital billboard advertising rotates, and more targeted advertising can rotate more often, allowing operators to sell more ads. According to Synaps, a targeted ad shown 8,500 times in one month will reach the same number of targeted drivers (approximately 22,000) as a typical ad shown 55,000 times. The Jaguar campaign paid the billboard operator based on the number of impressions, as Web advertisers do. The traditional billboard-advertising model is priced instead on airtime, similar to TV ads.

In Russia, Synaps expects to be operating on 20 to 50 billboards this year. The company is also planning a test in the U.S. this summer, where there are roughly 7,000 digital billboards, a number growing at 15 percent a year, according to the company. (By contrast, there are 370,000 conventional billboards.) With a row of digital billboards along a road, they could roll the ads as the cars move along, making billboard advertising more like the storytelling style of television and the Internet, says Synaps’s cofounder Alex Pustov.

There are limits to what the company will use its cameras for. Synaps won’t sell data on individual drivers, though the company is interested in possibly using aggregate traffic patterns for services like predictive traffic analysis, sociodemographic analysis of commuters versus residents in an area, traffic-emissions tracking, or other uses.

Out of safety concerns, license plate data is encrypted, and the company says it will comply with local regulations limiting the time this kind of data can be stored, as well.

Well that’s alright then! 😉
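
Purely as an illustration of the pipeline the article describes – a sketch of the generic camera → classifier → auction → billboard loop, not Synaps Labs’s actual system; every class, function name, and price below is hypothetical:

```python
# A toy sketch of the camera -> classifier -> auction -> billboard loop
# described in the article. All names are hypothetical: a real system
# would use a trained vision model and a real ad exchange, not stubs.

from dataclasses import dataclass


@dataclass
class Bid:
    advertiser: str
    creative: str                 # the ad to display
    target_models: set[str]       # makes/models this advertiser wants
    price_per_impression: float   # paid per targeted showing


def classify_vehicle(image) -> str:
    """Stand-in for the machine-learning step.

    In the system described, high-speed cameras roughly 180 m ahead of
    the billboard capture images, and a model returns a make/model label.
    """
    raise NotImplementedError("substitute a trained vehicle classifier")


def select_creative(model: str, bids: list[Bid], default: str) -> str:
    """Auction step: the highest-paying bid targeting this model wins."""
    matching = [b for b in bids if model in b.target_models]
    if not matching:
        return default            # fall back to the untargeted rotation
    return max(matching, key=lambda b: b.price_per_impression).creative


# Example: a campaign like the Jaguar one from the article, targeting
# the models the piece mentions (the price is invented).
bids = [Bid("Jaguar", "jaguar_suv_ad",
            {"BMW X5", "BMW X6", "Volvo XC60"}, 0.12)]

# As each car passes the camera a few seconds ahead of the billboard:
#   billboard.show(select_creative(classify_vehicle(frame), bids, "house_ad"))
```

On the article’s own numbers, the appeal to billboard operators is straightforward arithmetic: reaching roughly 22,000 targeted drivers takes 8,500 targeted plays rather than 55,000 untargeted ones – about 6.5 times fewer plays per driver reached, freeing rotation slots to sell to other advertisers.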

Talking with Mikayla

Talking with Mikayla, the Museum of Contemporary Commodities guide. Image credit: Mike Duggan.

At the RGS-IBG Annual International Conference 2017, Paula Crutchlow, co-originator of the Museum of Contemporary Commodities (MoCC), and I staged a conversation with Mikayla the MoCC guide, a hacked ‘My Cayla Doll’. This was part of two sessions that capped off the presence of MoCC at the RGS-IBG, performed alongside a range of other provocations on the theme(s) of ‘data-place-trade-value’. The doll was only mildly disobedient, and it was fun to be able to show the subversion of an object of commercial surveillance in a playful way. Below are the visuals that were displayed during the conversation, with additional sound…

For more, please do go and read Paula’s excellent blogpost about Mikayla on the MoCC website.

“Invisible Images: Ethics of Autonomous Vision Systems” Trevor Paglen at “AI Now” (video)


Via Data & Society / AI Now.

Trevor Paglen on ‘autonomous hypernormal mega-meta-realism’ (probably a nod to Adam Curtis there). A brief, entertaining talk about ‘AI’ visual-recognition systems and their aesthetics.

(I don’t normally hold with laughing at your own gags, but Paglen says some interesting things here – expanded upon in this piece (‘Invisible Images: Your pictures are looking at you’) and in the artwork Sight Machines [see below].)

Responsive media


It’s interesting to compare competing interpretations of the same ‘vision’ for our near-future everyday media experience. They more or less circle around a series of themes that have been a staple of science fiction for some time: media are embedded in the everyday environment and they respond to us, to varying degrees, personally.

On the one hand, some tech enthusiasts/developers present ideas such as “responsive media”, a vision put forward by a former head of ubiquitous computing at Xerox PARC, Bo Begole. On the other hand, sceptics have, for quite some time, presented us with dystopian and/or ‘critical’ reflections on the kinds of ethical and political(-economic) ills such ideas might mete out upon us (more often than not from a broadly Marxian perspective), recently expressed in Adam Greenfield’s op-ed for the Graun (publicising his new book “Radical Technologies”).

It’s not like there aren’t plenty of start-ups, and bigger companies (Begole now works for Huawei), trying more or less to make the things that science-fiction books and films (often derived in some way from Philip K. Dick’s oeuvre) present as insidious and nightmarish. Here I can unfairly pick on two quick examples: the Channel 4 “world’s first personalised advert” (see the video above) and OfferMoments:

While it may be true that many new inventors are subconsciously inspired by the science fiction of their childhoods, this form of inspiration is hardly seen in the world of outdoor media. Not so for OfferMoments – a company offering facial recognition-powered, programmatically-sold billboard tech directly inspired by the 2002 thriller, Minority Report.

I’ve discussed this in probably too prosaic terms as a ‘politics of anticipation’, but this, by Audrey Watters (originally about EdTech), seems pretty incisive to me:

if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

I have come to think this has produced a kind of orientation towards particular ideas and ideals around automation, which I’ve variously been discussing (in the brief moments in which I manage to do research) as an ‘algorithmic’ and more recently an ‘automative’ imagination (in the manner in which we, geographers, talk about a ‘geographical imagination’).

How and why is children’s digital data being harvested?

Nice post by Huw Davies, which is worth a quick read (it’s fairly short)…

We need to ask what data capture and management would look like if guided by a children’s framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.

Through a data broker darkly…

Here’s an exercise to do, as a non-specialist, for yourself or maybe as part of a classroom activity: discuss what Facebook (or data brokers, credit checkers, etc.) might know about me/us/you, how accurate that data/information might be, and what that means for our lives.

One of the persistent themes in how we tell stories about the ‘information society’, ‘big data’, corporate surveillance and so on is the extent of the data held about each and every one of us. Lots of stories are told on the back of that, and there are, of course, real-life consequences to inaccuracies.

Nevertheless, an interesting way of starting the exercise above is to compare and contrast the following two articles:

Corporate Surveillance in Everyday Life:

The exploitation of personal information has become a multi-billion industry. Yet only the tip of the iceberg of today’s pervasive digital tracking is visible; much of it occurs in the background and remains opaque to most of us.

I Bought a Report on Everything That’s Known About Me Online:

If you like percentages, nearly 50 percent of the data in the report about me was incorrect. Even the zip code listed does not match that of my permanent address in the U.S.; it shows instead the zip code of an apartment where I lived several years ago. Many data points were so out of date as to be useless for marketing–or nefarious–purposes: My occupation is listed as “student”; my net worth does not take into account my really rather impressive student loan debt. And the information that is accurate, including my age and aforementioned net worth (when adjusted for the student debt), is presented in wide ranges.

Of course, it matters little whether the data is correct – inaccuracies have real-world consequences of their own, and the granularity of the accuracy only matters in certain circumstances. So, thinking about how and why the data captured about us matters – what it might facilitate, allow, or prevent us or those around us from doing – seems like an interesting activity to occupy thirty minutes or so…

Reblog> Workshop on Security and the Political Turn in the Philosophy of Technologies

An interesting event blogged by Peter-Paul Verbeek:

Workshop ‘Security and the Political Turn in the Philosophy of Technologies’, University of Twente | DesignLab, 10 March 2017.

How to understand the political significance of things? And how to deal with the politics of technology in a responsible way? Ever since Langdon Winner claimed in the early 1980s that “artifacts have politics”, these questions have been puzzling philosophers and ethicists of technology. Technologies are not just instruments for humans to do politics but actively shape politics themselves. In this workshop we will explore various dimensions of this political role of technologies, especially with regard to security, citizenship in a technological world, and the role of social media and ‘fake news’ in contemporary democracy.

Speakers include:

  • Babette Babich (Fordham)
  • Robin James (UNCC)
  • Laura Fichtner (TUD)
  • Wolter Pieters (TUD)
  • Melis Bas (UT)
  • Jonne Hoek (UT)
  • Philip Brey (UT)
  • Nolen Gertz (UT)
  • Michael Nagenborg (UT)
  • Peter-Paul Verbeek (UT)

The workshop is sponsored by the 4TU.Ethics working group on “Risk, Safety, and Security.”