Reblog (video): Gillian Rose – Tweeting the Smart City

Smart City visualisation

Via The Programmable City.

Seminar 2 (video): Gillian Rose – Tweeting the Smart City

We are delighted to share the video of our second seminar in our 2017/18 series, entitled Tweeting the Smart City: The Affective Enactments of the Smart City on Social Media, given by Professor Gillian Rose from Oxford University on the 26th October 2017 and co-hosted with the Geography Department at Maynooth University.

Abstract
Digital technologies of various kinds are now the means through which many cities are made visible and their spatialities negotiated. From casual snaps shared on Instagram to elaborate photo-realistic visualisations, digital technologies for making, distributing and viewing cities are more and more pervasive. This talk will explore some of the implications of that digital mediation of urban spaces. What forms of urban life are being made visible in these digitally mediated cities, and how? Through what configurations of temporality, spatiality and embodiment? And how should that picturing be theorised? Drawing on recent work on the visualisation of so-called ‘smart cities’ on social media, the lecture will suggest the scale and pervasiveness of digital imagery now means that notions of ‘representation’ have to be rethought. Cities and their inhabitants are increasingly mediated through a febrile cloud of streaming image files; as well as representing cities, this cloud also operationalises particular, affective ways of being urban. The lecture will explore some of the implications of this shift for both theory and method as well as critique.

Roadside billboards display targeted ads in Russia

racist facial recognition

From the MIT Tech Review:

Moscow Billboard Targets Ads Based on the Car You’re Driving

Targeted advertising is familiar to anyone browsing the Internet. A startup called Synaps Labs has brought it to the physical world by combining high-speed cameras set up a distance ahead of the billboard (about 180 meters) to capture images of cars. Its machine-learning system can recognize in those images the make and model of the cars an advertiser wants to target. A bidding system then selects the appropriate advertising to put on the billboard as that car passes.
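The pipeline described above (camera → vehicle recognition → bidding → billboard) can be sketched in a few lines. This is a minimal illustrative sketch, not Synaps Labs' actual system: all names, the toy classifier, and the simple highest-bid auction are my assumptions.

```python
# Hypothetical sketch of the described billboard pipeline (not Synaps's code):
# recognise an approaching car, run a bidding round among campaigns that
# target that model, and show the winning creative on the billboard.

from dataclasses import dataclass

@dataclass
class Campaign:
    advertiser: str
    creative: str            # the ad shown on the billboard
    target_models: set       # car models this campaign targets
    bid: float               # price offered per impression

def recognise_vehicle(frame_id: str) -> str:
    """Stand-in for the machine-learning classifier that maps a camera
    frame to a make/model label. Here it is just a toy lookup."""
    toy_labels = {"frame-001": "BMW X5", "frame-002": "Volvo XC60"}
    return toy_labels.get(frame_id, "unknown")

def select_ad(model: str, campaigns: list) -> str:
    """The highest bid among campaigns targeting this model wins the slot;
    otherwise the billboard falls back to its default rotation."""
    eligible = [c for c in campaigns if model in c.target_models]
    if not eligible:
        return "default rotation"
    return max(eligible, key=lambda c: c.bid).creative

campaigns = [
    Campaign("Jaguar", "New Jaguar SUV", {"BMW X5", "BMW X6", "Volvo XC60"}, 2.0),
    Campaign("Acme Insurance", "Insurance ad", {"BMW X5"}, 1.0),
]

model = recognise_vehicle("frame-001")   # "BMW X5"
print(select_ad(model, campaigns))       # "New Jaguar SUV"
```

The 180-metre camera offset mentioned in the article is what buys the system enough time to run the recognition and auction steps before the car reaches the screen.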

Marketing a car on a roadside billboard might seem a logical fit. But how broad could this kind of advertising be? There is a lot an advertiser can tell about you from the car you drive, says Synaps. Indeed, recent research from a group of university researchers led by Stanford found that—using machine vision and deep learning—analyzing the make, model, and year of vehicles visible in Google Street View could accurately estimate income, race, and education level of a neighborhood’s residents, and even whether a city is likely to vote Democrat or Republican.

As the camera spots a BMW X5 in the third lane, and later a BMW X6 and a Volvo XC60 in the far left lane, the billboard changes to show Jaguar’s new SUV, an ad that’s targeted to those drivers.

Synaps’s business model is to sell its services to the owners of digital billboards. Digital billboard advertising rotates, and more targeted advertising can rotate more often, allowing operators to sell more ads. According to Synaps, a targeted ad shown 8,500 times in one month will reach the same number of targeted drivers (approximately 22,000) as a typical ad shown 55,000 times. The Jaguar campaign paid the billboard operator based on the number of impressions, as Web advertisers do. The traditional billboard-advertising model is priced instead on airtime, similar to TV ads.
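Taken at face value, the figures quoted above imply a large efficiency gain for targeted slots. A quick back-of-the-envelope check (my arithmetic, not the article's):

```python
# Back-of-envelope check of the quoted figures: a targeted ad shown 8,500
# times reaches the same ~22,000 target drivers as a typical ad shown
# 55,000 times.

targeted_shows = 8_500
typical_shows = 55_000   # showings a typical ad needs for the same reach

efficiency_gain = typical_shows / targeted_shows
print(f"Targeted slots need ~{efficiency_gain:.1f}x fewer showings")  # ~6.5x

# Those freed-up rotations are the operator's upside: slots that can be
# resold to other advertisers instead of repeating one untargeted ad.
freed_rotations = typical_shows - targeted_shows
print(freed_rotations)  # 46500
```

That roughly 6.5-fold difference is presumably why the per-impression pricing model (borrowed from the web) is attractive to billboard operators.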

In Russia, Synaps expects to be operating on 20 to 50 billboards this year. The company is also planning a test in the U.S. this summer, where there are roughly 7,000 digital billboards, a number growing at 15 percent a year, according to the company. (By contrast, there are 370,000 conventional billboards.) With a row of digital billboards along a road, they could roll the ads as the cars move along, making billboard advertising more like the storytelling style of television and the Internet, says Synaps’s cofounder Alex Pustov.

There are limits to what the company will use its cameras for. Synaps won’t sell data on individual drivers, though the company is interested in possibly using aggregate traffic patterns for services like predictive traffic analysis and the sociodemographic analysis of commuters versus residents in an area, traffic emissions tracking, or other uses.

Out of safety concerns, license plate data is encrypted, and the company says it will comply with local regulations limiting the time this kind of data can be stored, as well.

Well that’s alright then! 😉

Scaring you into ‘digital safety’

I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the module about technology that I run. In particular, about the sense of risk and threat that sometimes gets articulated about digital media and how this maybe carries with it other kinds of narrative about technology, like versions of determinism for instance. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility, for whom, and why in relation to ‘digital’ media… I could write more but I’ll just post a few of the videos for now…

‘Automated’ sweated labour

Charlie Chaplin in Modern Times

This piece by Sonia Sodha (Worry less about robots and more about sweatshops) in the Grauniad, which accompanies an episode of the Radio 4 programme Analysis (Who Speaks for the Workers?), is well worth checking out. It makes a case that seems to be gaining consensus – that ‘automation’ in particular parts of industry will not mean ‘robots’ but pushing workers to become more ‘robotic’. This is an interesting foil to the ‘automated luxury communism’ schtick and the wider imaginings of automation. If you stop to think about wider and longer-term trends in labour practices, it also feels depressingly possible…

This is the underbelly of our labour market: illegal exploitation, plain and simple. But there are other legal means employers can use to sweat their labour. In a sector such as logistics, smart technology is not being used to replace workers altogether, but to make them increasingly resemble robots. Parcel delivery and warehouse workers find themselves directed along exact routes in the name of efficiency. Wrist-based devices allow bosses to track their every move, right down to how long they take for lavatory breaks and the speed with which they move a particular piece of stock in a warehouse or from the delivery van to someone’s front door.

This hints at a chilling future: not one where robots have replaced us altogether, but where algorithms have completely eroded worker autonomy, undermining the dignity of work and the sense of pride that people can take in a job well done.

This fits well with complementary arguments about ‘heteromation’ and other more nuanced understandings of what’s followed or extended what we used to call ‘post-Fordism’…

The dystopian ‘megacity’ future according to US Defence Dept.

dystopian city

Via the Intercept.

Megacities Urban Future, the “Emerging Complexity,” from Philippe Desrosiers on Vimeo.

According to a startling Pentagon video obtained by The Intercept, the future of global cities will be an amalgam of the settings of “Escape from New York” and “Robocop” – with dashes of “The Warriors” and “Divergent” thrown in. It will be a world of Robert Kaplan-esque urban hellscapes – brutal and anarchic supercities filled with gangs of youth-gone-wild, a restive underclass, criminal syndicates, and bands of malicious hackers.

At least that’s the scenario outlined in “Megacities: Urban Future, the Emerging Complexity,” a five-minute video that has been used at the Pentagon’s Joint Special Operations University. All that stands between the coming chaos and the good people of Lagos and Dhaka (or maybe even New York City) is the U.S. Army, according to the video, which The Intercept obtained via the Freedom of Information Act.

65% of future non-existent jobs (which doesn’t exist); 70% of jobs automated (just not yet)

Twiki the robot from Buck Rogers

The future of work is the work of imagination. We are, repeatedly, and have been for a while, bombarded with (pseudo-)facts about what the future of work will bring. These are, of course, part of well-known, long-standing, narratives about ‘innovation’, ‘growth’, technological advance and, of course, ‘automation’.

Martin shared a good post by Doxtdator, on his site Long View on Education, about some persistent kinds of story around the nature of work our schools are preparing children for, or not. Here’s an abridged, and selective, version of the story…

“The top 10 in demand jobs in 2010 did not exist in 2004. We are currently preparing students for jobs that don’t exist yet, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.”

Shift Happens videos (2007).

People repeat the claim again and again, but in slightly different forms. Sometimes they remove the dates and change the numbers; 65% is now in fashion. Respected academics who study education, such as Linda Darling-Hammond (1:30), have picked up and continue to repeat a mutated form of the factoid, as has the World Economic Forum and the OECD.

[…]

“By one popular estimate 65% of children entering primary schools today will ultimately work in new job types and functions that currently don’t yet exist. Technological trends such as the Fourth Industrial Revolution will create many new cross-functional roles for which employees will need both technical and social and analytical skills. Most existing education systems at all levels provide highly siloed training and continue a number of 20th century practices that are hindering progress on today’s talent and labour market issues. … Businesses should work closely with governments, education providers and others to imagine what a true 21st century curriculum might look like.”

The WEF Future of Jobs report

[…]

Cathy Davidson (May 2017) explains how she came to the factoid:

“I first read this figure in futurist Jim Carroll’s book, Ready, Set, Done (2007). I tracked his citation down to an Australian website where the “65%” figure was quoted with some visuals and categories of new jobs that hadn’t existed before. “Genetic counseling” was the one I cited in the book.

After Now You See It appeared, that 65% figure kept being quoted so I attempted to contact the authors of the study to be able to learn more about their findings but with no luck. By then, the site was down and even the Innovation Council of Australia had been closed by a new government.”

The BBC radio programme More or Less picks up the story from here, demonstrating how it most likely has no factual basis derived from any identifiable source (there never was an Innovation Council of Australia, for example).

Davidson sort of defends this through dissimulation, in an interview for More or Less, by saying she believes that 100% of jobs have been affected by ‘the digital era we now live in’.

As Audrey Watters has highlighted, statistics like this and the appeal for a ‘disruption’ of education by the tech sector to teach ‘the skills of the future’ etc. can be reasonably interpreted as a marketing smoke screen – ‘the best way to predict the future is to issue a press release’.

An allied claim, which falls within the same oeuvre as the “65%” of not-existing jobs (or should that be non-existent?), is the various statistics for the automation of job roles, with varying timescales. A canonical example, from another “thought leader” (excuse me while I just puke in this bin), is from WIRED maven Kevin Kelly:

There are an awful lot of variations on this theme, focusing on particular countries, especially the USA, or particular sectors, or calculating likelihoods for particular kinds of jobs and so on and so on. This is, of course, big business in and of itself – firms like Deloitte, McKinsey and others sell this bullshit to anyone willing to pay.

What should we make of all this..?

There are a few interpretations we can make of this genre of ‘foresight’. Alongside several other academics I have written about particular ways of communicating possible futures, making them malleable-yet-certain in some way, as a ‘politics of anticipation‘. This politics has various implications, some banal, some perhaps more troubling.

First, you might say it’s a perfectly understandable tendency, of pretty much all of us, to try and lend some certainty to the future. So, in our adolescent know-it-all way, we are all wont to lend our speculations some authority, and statistics, however spurious, is a key tool for such a task.

Second, and perhaps allied to the first, is the sense in which methods for speculation become formalised and normative – they’re integrated into various parts of institutional life. So, it becomes normal to talk about speculative (spurious?!) statistics about a future of work, education etc. in the same tone, with the same seriousness, and the same confidence as statistics about a firm’s current inventory, or last year’s GDP trends. Of course, all statistics, all facts, have conditions and degrees of error and so if the calculation of trends for past events is open to change, the rationale might be, perhaps future trends are just as reliable (there’s all sorts of critique available here but I’m not going to delve into that). In this way, consultancies can package up ‘foresight’ as a product/service that can be sold to others. “Futures” are, of course, readily commodified.

Third, an ideological critique might be that it is precisely these forms of storytelling about the redundancy or insufficiency of the labour force that allow those with the large concentrations of capital to accrue more by demeaning the nature of work itself and privatising profits upwards. If we are repeatedly told that the work that generates the goods and services that move through our economy is worth less – because it can be automated, because it is ‘out-dated’, because there are other kinds of superior ‘skilled’ work – then it perhaps becomes easier to suppress wage growth, to chip away at labour rights and render work more precarious. Gloomy I know. However, some data (oh no! statistics!) Doxtdator has in his blogpost (and the kinds of data David Harvey uses in his books, such as The Enigma of Capital) could be seen as backing up such arguments. For example (source):


These sorts of graphs tell a different story about yesterday’s future – which didn’t lead to families reaping the rewards of automation and increased productivity by profiting from a share in increased leisure time (following JM Keynes), but rather delivered the profits of these trends to the “1%” (or even the “0.1%”) by massively increasing top executive salaries while keeping wider wage growth comparatively low, if not stagnant. I’m not an economist, so I don’t want to push my luck arguing this point, but there are folk out there who argue such points pretty convincingly, such as David Harvey (though see also economic critiques of the ‘zombie’ automation type of argument).

Ultimately, I am, personally, less interested in the numbers themselves – who knows if 65% of today’s school children will be doing new jobs that represent only 70% of the total work we currently undertake?! I’m more interested in the kinds of (speculative) truth-making or arguing practices they illustrate. The forms of speculative discourse/practice/norms about technology and work we’re all involved in reproducing. It seems to me that if we can’t fathom those things, we’re less able to care for those of us materially affected by what such speculation does, because, of course, sometimes speculation is self-fulfilling.

To try to advance some discussions about the kinds of technological and economic future that get proposed, gain momentum and become something like “truths”, I’ve been puzzling over the various ways we might see the creation of these economic statistics, the narrating of technological ‘innovation’ in particular ways, and the kinds of stories ‘critical’ academics then tell in analysing these things as together making up some form of collective imagination. I started out with ‘algorithms’ but I think that’s merely one aspect of a wider set of discourses about automation that I increasingly feel needs to be addressed. My placeholder term for the moment is an “automative imaginary” ~ a collective set of discourses and practices by which particular versions of automation, in the present and the future, are brought into being.

Responsive media

personal media

It’s interesting to compare competing interpretations of the same ‘vision’ for our near-future everyday media experience. They more or less circle around a series of themes that have been a staple of science fiction for some time: media are in the everyday environment and they respond to us, to varying degrees personally.

On the one hand, some tech enthusiasts/developers present ideas such as “responsive media“, a vision put forward by a former head of ubiquitous computing at Xerox PARC, Bo Begole. On the other hand, sceptics have, for quite some time, presented us with dystopian and/or ‘critical’ reflections on the kinds of ethical and political(-economic) ills such ideas might mete out upon us (more often than not from a broadly Marxian perspective), recently expressed in Adam Greenfield’s op-ed for the Graun (publicising his new book “Radical Technologies”).

It’s not like there aren’t plenty of start-ups, and bigger companies (Begole now works for Huawei), trying to more-or-less make the things that science fiction books and films (often derived in some way from Philip K. Dick’s oeuvre) present as insidious and nightmarish. Here I can unfairly pick on two quick examples: the Channel 4 “world’s first personalised advert” (see the video above) and OfferMoments:

While it may be true that many new inventors are subconsciously inspired by the science fiction of their childhoods, this form of inspiration is hardly seen in the world of outdoor media. Not so for OfferMoments – a company offering facial recognition-powered, programmatically-sold billboard tech directly inspired by the 2002 thriller, Minority Report.

I’ve discussed this in probably too-prosaic terms as a ‘politics of anticipation’, but this, by Audrey Watters (originally about EdTech), seems pretty incisive to me:

if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

I have come to think this has produced a kind of orientation towards particular ideas and ideals around automation, which I’ve variously been discussing (in the brief moments in which I manage to do research) as an ‘algorithmic’ and more recently an ‘automative’ imagination (in the manner in which we, geographers, talk about a ‘geographical imagination’).

Deum ex machina? A journey into transhumanism as/via religion

Machines from the gods… or…

God in the machine: my strange journey into transhumanism – podcast

After losing her faith, a former evangelical Christian felt adrift in the world. She then found solace in a radical technological philosophy – but its promises of immortality and spiritual transcendence soon seemed unsettlingly familiar

An interesting and compelling podcast of a ‘long read’ for the Graun by Meghan O’Gieblyn that eloquently articulates the not-so-crypto-theistic nature of (some) transhumanism(s).

Here’s a paragraph to whet the appetite:

By this point I’d passed beyond idle speculation. A new, more pernicious thought had come to dominate my mind: transhumanist ideas were not merely similar to theological concepts but could in fact be the events described in the Bible. It was only a short time before my obsession reached its culmination. I got out my old study Bible and began to scan the prophetic literature for signs of the cybernetic revolution. I began to wonder whether I could pray to beings outside the simulation. I had initially been drawn to transhumanism because it was grounded in science. In the end, I became consumed with the kind of referential mania and blind longing that animates all religious belief.