Category Archives: vision

Reblog > Drones, sport and ‘eventization’

Last month Patrick Crogan wrote a great, pithy blogpost about the conduct and conceptualisation of war in relation to the relentless gaze of drones equipped with computer vision technologies that originate in professional sports video analysis. Folding together Derek Gregory’s recent detailed reading of Grégoire Chamayou’s ‘Théorie du drone’, the work of the International Committee for Robot Arms Control and Bernard Stiegler’s theorisation of the industrialisation of memory, Patrick highlights how the software systems embedded within the surveillance and attack capabilities of increasingly quasi-autonomous drones operate in the very constitution of the events of war: not merely reacting or functioning as equipment, but proactively producing events. Reproduced below…


This post is to start some ideas circulating from work I am increasingly preoccupied with concerning military robotics and AI, as a particular (and, in many ways, particularly important) case of the automatizing technologies emerging today. This is a big topic attracting an increasing amount of critical attention, notably from people like Derek Gregory (whose Geographical Imaginations blog is a treasure trove of insights, lines of inquiry and links on much of the work going on around this topic), and Lucy Suchman, who is part of the International Committee for Robot Arms Control and brings a critical STS perspective to drones and robotics on her Robot Futures blog.


I’m reading French CNRS researcher Grégoire Chamayou’s Théorie du drone, a book which has made a powerful start on the task of philosophically (as he has it) interrogating the introduction of these new weapons systems, which are transforming the conduct, conceptualisation and horizon of war, politics and the technocultural global future today. Many riches in there, but I just read (p. 61) that the U.S. Air Force Intelligence, Surveillance and Reconnaissance Agency, looking for ways to deal with the oceans of video data collected by drones constantly overflying territory with unblinking eyes, obtained a version of software developed by ESPN and used in its coverage of American football. The software provides for the selection and indexing of clips from the multiple-camera coverage of football games, enabling their rapid recall and use in the analysis of plays (which, as anyone who watches NFL or college football coverage knows, takes up much more time than the play itself in any given broadcast). The software can archive footage (from the current or previous games) in a manner that makes it immediately available to the program director in compiling material for comparative analysis, illustration of player performance, tactical/strategic traits of a team, and so on. The player and the key play can be systematically broken down, tracked in time, identified as exceptional or as part of a broader play style, and so forth.
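The clip-indexing workflow described here can be sketched, purely as a toy illustration (the class names, tags and sample data below are mine, and bear no relation to ESPN’s actual system), as a tag-based archive supporting rapid recall:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    """A slice of archived footage with analyst-applied tags."""
    game_id: str
    start_s: float  # offset into the broadcast, in seconds
    end_s: float
    tags: frozenset  # e.g. {"blitz", "QB:12"}

class ClipArchive:
    """Toy inverted index: tag -> clip positions, for rapid recall."""
    def __init__(self):
        self._clips = []
        self._by_tag = {}

    def add(self, clip):
        idx = len(self._clips)
        self._clips.append(clip)
        for tag in clip.tags:
            self._by_tag.setdefault(tag, set()).add(idx)

    def recall(self, *tags):
        """All clips carrying every requested tag, in archive order."""
        hits = set.intersection(*(self._by_tag.get(t, set()) for t in tags))
        return [self._clips[i] for i in sorted(hits)]
```

The same structure supports the comparative queries described above: recalling every archived clip tagged with both a given player and a given play type is a single index intersection.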

These capacities are precisely what makes the software desirable to the US Air Force, inasmuch as the strategic development of drone operations deals with effectively the same analytical problem: the player and the key play, the insurgent/terrorist and the key act (IED, ambush, etc.). The masses of video surveillance of ‘gridded’ battlespace, a vast ‘arena’ similarly zoned in precisely measurable slices (but in 3D), must be selectable, taggable and recoverable in such a way as to be usable in the review of drone operations. And the logic (or logistic, as Virilio would immediately gloss it) of this treatment of ‘battlespace’ is realised in what has recently emerged unofficially from the Obama administration-Pentagon interface as the emerging strategic deployment of drones by the CIA (which runs a significant and unreported proportion of drone operations globally). This targeting strategy is based precisely on pattern analysis, both in tracking known suspected enemies of the state and in identifying what are called ‘signature targets’ (the signature referring to a ‘data signature’ of otherwise unidentified individuals, one that matches the movements and associations of a known insurgent/terrorist; see Gregory’s post on this in Geographical Imaginations).

The ethical and juridical-political dimensions of this strategy are coming under increasing and much-needed scrutiny (more to come on this). For a media/games theorist, the striking thing about this felicitous mutuality of affordances between pro-sport mediatisation technics and those in development for the conduct of drone operations is the reorientation to space that it not only metaphorically suggests (war, become game, now steering the metaphoric vehicle back in the other direction) but enacts through an ‘eventization’ (Stiegler) operating in the very constitution of the ‘event’ of war or counter-insurgency (or what James Der Derian called ‘post war warring’). While there are many complicit actors benefiting from the profitable mediatized evolution of American football into a protracted, advertising-friendly broadcast, no such ‘partnership’ exists between the key players ‘on the ground’ and those re-processing their data trails.

A brief history of the future of pervasive media – Talk at the Pervasive Media Studio

I will be giving a talk at the Pervasive Media Studio on Friday 14th May entitled ‘A brief history of the future of pervasive media’, which is broadly derived from my PhD research. The talk will be open to the public, so please feel free to come along! Here’s the bumpf:

Pervasive media, and the various forms of computing from which they are derived, stem from a tradition of anticipating future scenarios of technology use. Sam Kinsley’s PhD research concerned the ways in which those involved in pervasive computing research and development imaginatively envision future worlds in which their technologies exist.

This lunchtime talk examines the ways in which future people, places and things are imagined in the research and development of pervasive media. Examples taken from prospective pervasive computing research and development over the last twenty years will be explored as emblematic of such future-gazing. The aim is to provide a broad means of understanding the rationales by which technological futures are invoked, so that pervasive media producers can critically reflect on the role of the idea of the future in their work. Such an understanding is important because a history of computing is in large part a history of places and things that were never created – a history of yesterday’s tomorrows.

Ubiquitous Computing: Mark Weiser’s vision and legacy

This is a sub-section of the first chapter of my PhD thesis; it is my attempt to reflect on Mark Weiser’s legacy in the field of ubiquitous computing.

2009 marked the tenth anniversary of the death of Mark Weiser, a man who many believe earned the title ‘visionary’. As a Principal Scientist and subsequently Chief Technology Officer at Xerox PARC, Weiser has been identified as the ‘godfather’ of ubiquitous computing (ubicomp). In the years since his death many of the ideas that Weiser championed have come to greater prominence. As Yvonne Rogers points out, this influence has been felt across industry, government and commercial research, from the European Union’s ‘disappearing computer’ initiative to MIT’s ‘Oxygen’, HP’s ‘CoolTown’ and Philips’ ‘Vision of the Future’. All of these projects aspired to Weiser’s tenet of the everyday environment, and the objects within it, being embedded with computational capacities such that they might bend to our (human) will. Within the research community, as Genevieve Bell and Paul Dourish remark, ‘almost one quarter of all the papers published in the ‘Ubicomp’ conference between 2001 and 2005 cite Weiser’s foundational articles’.


Ironic vision of augmented (hyper)reality

Timo Arnall points out this video, by a master’s student(!), which depicts a slightly nightmarish, yet amusingly ironic, vision of a possible future world with augmented reality, whereby you earn money by subjecting yourself to advertising and depend upon instructions from the system for even basic tasks.

The latter half of the 20th century saw the built environment merged with media space, and architecture taking on new roles related to branding, image and consumerism. Augmented reality may recontextualise the functions of consumerism and architecture, and change the way in which we operate within it.

A film produced for my final year Masters in Architecture, part of a larger project about the social and architectural consequences of new media and augmented reality.

Augmented (hyper)Reality by Keiichi Matsuda

[via Timo Arnall & Berg]

Social glue, or: What’s the ‘IMAP’ equivalent for social media?

The launch of Google Buzz has prompted me to raise some things that have been lurking in the back of my mind for some time. These thoughts began when the discussion about the ‘walled garden’ nature of facebook et al. emerged a couple of years ago and led to tentative steps towards interconnection and (that horribly overused word) ‘openness’ in the guise of ‘friend connect‘ and ‘facebook connect‘. Twitter was already sort of ahead of the game with its API, as the glut of applications for ‘tweeting’ attests.

Much of the talk in the interweb’s various locations for commentary has centred on the social web, real-time web etc. being based in discrete platforms. This remains somewhat true today. We can certainly connect these services together and form extraordinary information-gathering tools in the shape of what Howard Rheingold usefully describes as ‘personal information dashboards’, using services such as netvibes and pipes in concert with the various APIs for the platforms we all use. However, this all takes quite a bit of effort at the moment [but! for a good tutorial, please check out Howard’s super videos: #1, #2, #3].

However, for the majority of internet users to usefully stick all of these various platforms and applications together, there needs to be a much lower threshold of effort to achieve the desired results. Jyri Engstrom, co-founder of Jaiku and apparently one of the big brains behind ‘Buzz’, articulates the argument well here:

Most of the conversation over the last 24h has been centered around predicting if “Buzz will kill” this or that service. This debate starts from the assumption that Buzz and the rest of the social web are mutually exclusive. It’s arguably fair to assume so, considering all the social networks we’ve got so far are silos. To no longer assume everyone has to be using the same branded system to talk to each other is disruptive to the tech biz discourse, which is obsessed with turning everything into a war over which company is “the one”. So much so that the alternative is almost unthinkable. If the new standards succeed, in 2015 we’ll look back and shake our heads like we shake our heads today at the early days of proprietary phone networks and email systems. The thought that you couldn’t call, text or email people just because they happen to be on another phone operator or email client is laughable. Doubly so for the social Web. The reason many of the current commentators miss this point is that they are, in the immortal words of Walt Whitman, “demented with the mania of owning things.” (borrowing that quote from Doc Searls)

What are these ‘new standards’, then? Well, if we’re to take our cue from Google, they consist of developments of the various existing data formats for syndication: extensions of Atom and RSS, such as activity streams and Media RSS. There may well be families and hierarchies of such data formats, and I’m sure hundreds, if not thousands, of developers are already working on creating these things. But I’m still left with this question: what if I don’t want my stuff (information, pictures, etc.) always held on servers owned by Google, facebook etc.? What if I’m happy for such ‘stuff’ to be transient? Which of course such companies don’t want, because your ‘stuff’ is incredibly valuable and they want to mine it for all it’s worth. Nevertheless, my half-formed thought is: what’s the equivalent of IMAP for social media?
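To make those formats a little more concrete, here is a minimal sketch, using only Python’s standard library, of what one ‘activity’ looks like serialised as an Atom entry carrying the Activity Streams verb extension (the function and sample data are mine and purely illustrative; this is nowhere near a complete implementation of either specification):

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM = "http://www.w3.org/2005/Atom"
ACTIVITY = "http://activitystrea.ms/spec/1.0/"  # Activity Streams Atom extension

def activity_entry(actor_name, verb_uri, title, when):
    """Serialise one social 'activity' as a minimal Atom entry string."""
    entry = ET.Element(f"{{{ATOM}}}entry")
    author = ET.SubElement(entry, f"{{{ATOM}}}author")
    ET.SubElement(author, f"{{{ATOM}}}name").text = actor_name
    ET.SubElement(entry, f"{{{ATOM}}}title").text = title
    ET.SubElement(entry, f"{{{ATOM}}}updated").text = when.isoformat()
    # The Activity Streams extension adds a machine-readable verb element.
    ET.SubElement(entry, f"{{{ACTIVITY}}}verb").text = verb_uri
    return ET.tostring(entry, encoding="unicode")

# Invented sample activity, standing in for a real posted item.
xml = activity_entry(
    "sam", "http://activitystrea.ms/schema/1.0/post",
    "sam posted a photo", datetime(2010, 2, 12, tzinfo=timezone.utc),
)
```

Because the entry is ordinary Atom, any existing feed reader can display it; only software that understands the extension namespace need care about the verb.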

To my mind, the missing ‘glue’ for the social networking ecosystem is a service architecture that would allow all of us to host our own streams and tie together the various bits of our rapidly growing, perhaps increasingly ‘public’, ‘digital identity’. Social media could easily be distributed, just as blogs and ‘web 1.0’ are. What’s to stop a community creating something like wordpress or drupal for activity/social streams? If the standards suggested by Google really are that versatile then all that is necessary is to create a system that imports/exports using them. Search would be renewed in its importance, but companies/services like twitter could remain successful by facilitating that search functionality and helping users subscribe to one another’s feeds/streams.
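As a hint of how little machinery the aggregation side of that architecture needs: once each person’s self-hosted stream can be fetched and parsed into (timestamp, text) pairs (the sample streams below are invented, standing in for parsed feeds), merging them into one timeline is a one-liner over sorted inputs:

```python
import heapq

def merge_streams(*streams):
    """Merge several per-person activity streams, each already sorted
    by timestamp, into one chronological timeline."""
    return list(heapq.merge(*streams, key=lambda item: item[0]))

# Invented sample streams, standing in for feeds fetched from two servers.
alice = [(1, "alice posted a photo"), (4, "alice wrote a post")]
bob = [(2, "bob shared a link"), (3, "bob replied to alice")]
timeline = merge_streams(alice, bob)
```

The hard part, as with IMAP, is not the merge but agreeing the protocol by which those streams are fetched, authenticated and kept in sync.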

A couple of years ago I thought about it in terms of a ‘meta-platform’ or ‘platform for platforms’, but we’ve kind of seen these, in the form of friendfeed and their ilk. Now I think, well, it could all still happen over port 80 with web traffic – it just needs an architecture to allow people to stick things up on their own servers and interconnect. If we take what Jyri says (above) seriously, it seems to me the logical step is to really set the social web ‘free’ and build the elements required to allow people to host their own activity streams. Maybe this is already happening. To build on Jyri’s theme of looking to a future and to paraphrase Alan Kay: “the best way to predict the future is to [build] it”. Go to it!!

Reflecting on Mark Weiser’s legacy ten years on

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

-Mark Weiser, 1991 “The Computer for the 21st Century” Scientific American

The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user… I call this future world “Ubiquitous Computing” (Ubicomp).

-Mark Weiser, 1993 “Some Computer Science issues in Ubiquitous Computing” Communications of the ACM

2009 marks the tenth anniversary of the death of a man who many believe earned the title ‘visionary’: his name was Mark Weiser. As a Principal Scientist and subsequently Chief Technology Officer at Xerox PARC, Weiser is best known as the ‘godfather’ of ubiquitous computing. In the years since his death many of the ideas that Weiser championed have come to greater prominence. As Yvonne Rogers points out, this influence has been felt across industry, government and commercial research, from the EU’s ‘disappearing computer’ initiative to MIT’s ‘Oxygen’, HP’s ‘CoolTown’ and Philips’ ‘Vision of the Future’. All of these projects aspired to Weiser’s tenet of the everyday environment, and the objects within it, being embedded with computational capacities such that they might bend to our (human) will. Within the research community, as Bell and Dourish remark, ‘of the 108 papers comprising the Ubicomp conference proceedings between 2001 and 2004, fully 47% of the papers are oriented towards a proximate (and inevitable) technological future’ and ‘almost one quarter of all the papers published in the Ubicomp conference between 2001 and 2005 cite Weiser’s foundational articles’.


Neologism ~ “spectaculation”

Science Buzz! Flickr photo by Unhindered by Talent

I’m no fan of coining neologisms, but(!) I think I need a word that pithily and succinctly allows me to cast mild derision at certain forms of speculation. It seems to be possible to carve out a career by publicising one’s work in ways that stretch beyond the conventional remit of a particular project, making grand claims about ‘progress’. This is often identifiable by the monotonous use of phrases such as “in the future you/we will…”. Sometimes this is excusable: people get excitedly exuberant about their research and ideas (sometimes it’s done for you!), but at other times it is clearly a deliberate tactic. Thus, I think we can describe what they’re up to as ‘spectaculation’. For it is not idle speculation but the taking of a speculative claim and the widening of its application, making it sound more important and thus more newsworthy, i.e. spectacular. So we arrive at spectaculation, and of course somebody else (probably lots of people, actually) has thought of this already (in a slightly different sense): credit where it’s due.

Image credit: Flickr user ‘Unhindered by Talent’.

Ubiquitous Computing video circa. 1991

“Coined by the Xerox Palo Alto Research Center’s (PARC) Computer Science Laboratory (CSL), [Ubiquitous Computing] describes a vision of the future. Just as electric motors have disappeared into the background of everyday life, PARC scientists envision a future where mobile computational devices will be similarly transparent. Potentially numbering in the 100s per person, these devices are nothing like those you use today. They are mobile. They know their location, and they communicate with their environment.”

I have no idea if I’m allowed to put this up, but it seems a desperate shame that this video isn’t held in one complete file, easily accessible to the public and to researchers, given the historical significance of the work conducted on ubicomp at PARC by Mark Weiser et al. during the late ’80s and early ’90s. Please see the original files here, and read more about Mark Weiser by sticking his name in Google.

Please note that I had to edit out 2 minutes of the more technical stuff to get the video down to under 10mins.

‘A Vision’ – Simon Armitage

The future was a beautiful place, once.
Remember the full-blown balsa-wood town
on public display in the Civic Hall.
The ring-bound sketches, artists’ impressions,
blueprints of smoked glass and tubular steel,
board-game suburbs, modes of transportation
like fairground rides or executive toys.
Cities like dreams, cantilevered by light.
And people like us at the bottle-bank
next to the cycle-path, or dog-walking
over tended strips of fuzzy-felt grass,
or motoring home in electric cars,
model drivers. Or after the late show –
strolling the boulevard. They were the plans,
all underwritten in the neat left-hand
of architects – a true, legible script.
I pulled that future out of the north wind
at the landfill site, stamped with today’s date,
riding the air with other such futures,
all unlived in and now fully extinct.

From Simon Armitage’s collection Tyrannosaurus Rex Versus the Corduroy Kid.

Thamesmead South, London – a vision and an actuality

GLC Architects’ vision of Thamesmead South

Picture credit: Flickr user Iqbal Aalam

Thamesmead. Bexley. London

Picture credit: Flickr user joseph beuys hat