Over on the Programmable City website there’s news of a new paper by Jim Merricks White on the anticipatory logics of smart cities… I have previous form here, so it’ll be an interesting read!
Over on her blog Visual / Method / Culture, Gillian Rose has written an interesting blogpost about the politics of ‘visibility’ online and the rendering visible of digital/electronic things and their infrastructures. She draws on some interesting work (that I didn’t know) by Shannon Mattern and Adam Rothstein.
This got me thinking about a couple of other related things that complement and provide further examples for Gillian’s argument…
Of course a big chunk of the early geographical fascination with all things web/internet was about visualisations and mappings as Dodge & Kitchin’s “Atlas of Cyberspace” attests… It also brings to mind the recent cases of the “right to be forgotten” in search results, and the ways in which those with the means can render what they deem undesirable “invisible” – the corollary of which is the increasing practices of public ‘shaming’ [see this great essay by Ben Jackson in the LRB], and the repugnant exploit of “revenge porn”.
So the politics of visibility Gillian teases out — and perhaps, here, more the *in*visible than the visible — is refracted through the existing politics of those who have (money, status and power, and so the means to render themselves selectively visible) and those who do not – who increasingly find themselves visible to all sorts of agencies.
Read the whole blogpost.
This video, edited by Tony Zhou, offers a nice articulation of the kinds of imaginative strategies for attempting to represent, on film, the ways we interact through screen-based devices (i.e. computers and phones). Zhou demonstrates how clunky attempts at verité don’t work – showing the screen while someone types takes ages and so it’s expensive. Instead, employing abstraction – such as floating text bubbles – advances the narrative without the need to film screens.
Of course, because these kinds of sequences sit within a linear narrative, the character must read the message instantly – whereas a lot of our screen-based interactions are asynchronous. It’s also a narrative device that has been employed in ‘design fiction‘ films to illustrate the abstract communication that takes place between the quasi-autonomous software programmes that (will/would/may) underpin the ‘internet of things’. For how else are we to represent the apparently immaterial and abstract mechanisms that constitute what Kitchin and Dodge have called ‘coded spaces’ (and/or coded objects, infrastructures and so on)? For example…
What this does, of course, is to render processes that operate in diverse temporalities (like the not-quite-speed-of-light speeds of electro-magnetic radiation), which are frequently cyclical and sporadic (CPU cycles and so on) and organised in ways oriented towards different modes of legibility (for speed of processing), into the linear conventions of film/TV. We have nuanced understandings of how these things operate within our daily lives (up to a point – they’re mostly figured around individual sensibilities rather than complex collectives) but I’m not really convinced that we yet have a nuanced means of articulating these things.
To adapt what Derek McCormack suggests, these are attempts to represent the ‘abstraction [that] is a constituent element of the background infrastructures that allow life to show up and register as experience’ [p. 720]. The reason I’m framing it in this fairly awkward way is that I think what Zhou’s video points to is that it is increasingly difficult for the ‘lay person’ to appreciate and understand the complex assemblages (or, rather, ‘agencements‘) of electronic systems that intimately affect how we live our lives. They are manifestly abstract, but this abstraction is frequently treated not in an affirmative sense but rather in an obfuscatory way.
Thinking about Zhou’s video and the growing impetus amongst social scientists to study the complexities of contemporary networked technologies, I am drawn to the idea that perhaps the kinds of visual devices used in the videos I’ve discussed here ought to be further employed to help describe and explain how contemporary processes of mediation function… Probably something the students on courses like ‘Design Informatics‘ are already doing..? It’s certainly something that ought to be part of any kind of ‘digital studies‘.
Postscapes holds annual ‘Internet of Things‘ awards, with projects nominated under various categories for which the viewing public (with net access) are invited to vote. This is the third year of the awards and the second in which I have been aware of a ‘design fiction‘ category.
Postscapes identify/define design fiction in the following way:
Grounded as much in imagination as reality, design fiction is about bending the rules. It’s about asking “What if?”, and using the remains to probe the edges of our changing world.
The results may only be props or prototypes — but the best ones, as recognized by the Design Fiction award, end up helping us navigate our near futures and the stories they share.
Last year (in the awards for 2012), the ‘design fiction’ category was ‘won’ by the slightly creepy and maybe a little bit flawed ‘ear hacking’:
In which we are asked to believe that there is a direct correlation between basic features of audio playback and our activity – in particular as runners. Anyway, it serves to demonstrate that humour works well in fostering an audience for design fiction. Notably, this ‘beat’ Google’s now infamous ‘project glass‘ video which, of course, was the forerunner for ‘glass‘.
In the running for the 2013 design fiction awards are a few interesting projects – you can see the whole list on the Postscapes website, but here are a couple that I think are in some way provocative…
Anne Galloway’s Design Culture Lab investigations of the Merino wool industry (see ‘counting sheep‘), with the lovely ‘bone knitter‘ that produces custom knitted casts for knitting broken limbs back together, and the rather unsettling ‘PermaLamb‘, a vision of custom GM lamb production. Check out the ‘counting sheep‘ website for more; it’s worth exploring.
James Bridle’s ‘surveillance spaulder’, a pithy and playful imagining of a device that viscerally reminds the wearer of their being surveilled:
Spike Jonze’s soon-to-be released film ‘Her’, in the tradition of various imaginings of AI (e.g. Brian Aldiss):
[NB. posting them here doesn’t necessarily mean approval…]
Shawn Sobers linked to a funny comment piece by Stewart Heritage on the Grauniad riffing on the idea of the ‘Internet of Things‘, with the main schtick being that there is such a lack of imagination behind the implementation of such ‘things’ that if we extrapolate then surely the interlinked ‘things’ will do us a mischief… Now, this is humorous, of course, but humour is also a good way to get us to think about why on earth we’re letting ourselves in for a vision of such ‘things’. I am not ‘anti-‘ technological innovation, I am merely arguing that we need to be critically reflective of the motivation behind the development of some of these systems and devices. The same kind of critical reflection we have seen in relation to the ‘MOOC revolution‘…
Here’s one of the funny bits from the article, extrapolating from actually existing technologies into the more ridiculous:
The Internet of Things has already produced some cool-sounding devices. There is the tennis racket kitted out with motion sensors to help you improve your game. There’s the parking sensor that directs your satnav to an empty spot. The basketball that, when bounced on the floor, automatically tells your home entertainment setup to start playing basketball-related content. The bridge that tells people when it’s about to collapse. The smoke alarm that switches itself off and works in conjunction with your electrical outlets to burn you to death in your sleep because it has become jealous of your capacity for love. The remote cave that fills itself with bears and poisonous snakes whenever it detects that someone has started sleeping in it because they’ve convinced themselves that their entire house has grown sentient and suddenly turned against them. All sorts, really. It’ll be fun.
Following on from the translation I made of Bernard Stiegler’s reflections on how digital (media) technologies can perform a valuable pedagogical role, I wanted to highlight that Martin Weller has given a very cogent and pointed critique of the fairly common narrative of ‘disruptive technology’ in relation to MOOCs.
This brings together two aspects of my own research: the ways in which those involved in computing R&D look to the future and anticipate the kinds of technologies they may want to produce (and the kinds of politics that produces); and what can be seen as the progressive commoditisation of our capacities to think and feel by certain applications of digital media.
Firstly, as Martin identifies in his blogpost, there is a widespread discourse of the necessity of breakthrough, disruption and revolution in the mythology of the aspirational technology sector located in Silicon Valley. This has some obvious foundations in the need to continually destroy and re-create new markets in a finite global system of capital (as David Harvey cogently diagnoses). It also has an interesting basis in alternative discourses of progress in the counterculture movements of that same region of the US, with Stewart Brand (founder of the Whole Earth Network) a significant exponent of libertarian thought in the growing ICT industry that translated into the creation of WIRED magazine as the purveyor of this techno-economic orthodoxy (for more on this see Fred Turner’s brilliant book).
Martin offers the insight that the rather clunky, and somewhat messianic, narrative of the need for an external agent to intervene in a slow, inefficient, outmoded (and so on) sector, central to the disruptive technology spiel, allows sharp and charismatic entrepreneurs to step in as the pseudo-saviour, i.e. Sebastian Thrun of Udacity and others of his ilk. Criticism of the West Coast (capitalist) mythology is not new, of course: Richard Barbrook and Andy Cameron offered a critique of the ‘Californian Ideology‘ in the 1990s, and Stiegler has criticised the ‘American model’ of laissez faire ‘cultural capitalism’ led by the ‘programming industries’ of new media (“functionally dedicated to marketing and publicity” [p. 5]) in his The Decadence of Industrial Democracies. Indeed, we can look back to Adorno and Horkheimer’s stinging critique of the Culture Industry as a formidable progenitor. What we can perhaps take from that line of argument is that arguing for a supposed ‘greater’ choice is actually a deception: the ‘choice’ is merely to consume more.
Others, who set themselves up as more thoughtful commentators, have also weighed in on the side of the need for a disruption/revolution. Martin highlights that Clay Shirky has also parroted the now well-worn, technologically determinist arc of argument. As with others, Shirky suggests that ‘education is broken‘ and must therefore be fixed by shiny new technology, in the form of MOOCs. Some proponents of this line of argument suggest that this would bring wider access to university level learning. There are a few (also well-worn but compelling) critiques of this line.
An obvious initial critique, as Martin argues in his blogpost, is that the ‘education is broken and so it requires a technological fix’ argument has gained so much traction because it is neat and easy to digest by journalists. A simple story with a clear solution is always going to trump the slightly messy, perhaps convoluted, and multiple stories that approximate the truth, for which there are unclear and troubling political solutions that require quite a lot of explanation and working through.
Furthermore, the existing evidence of engagement with MOOCs also somewhat contradicts the rosy picture painted by their evangelists. Completion rates for MOOCs tell a mixed story (as Martin has pointed out in other blogposts) and this perhaps speaks to the negotiation (by both students and course designers/leaders) of legitimacy and value for these courses – this is a sector still very much in flux. Those who are passionate about providing equal and wider access to university level education are torn by the desire to offer courses that open up (frequently excellent) materials for anyone to access but this is, of course, only a fraction of what we as university lecturers and students do when running and participating in courses.
We’re all, of course, increasingly proficient at consuming content online and MOOCs leverage that behaviour. What such systems are not so good at is providing something analogous to tutorials. The stand-in for this is peer discussion/support, which, of course, comes with its own social and cultural issues around facilitation, particular participants becoming overbearing, and so on. So, these (socio-)technological fixes are not necessarily a like-for-like stand-in for all of those significant but hard to define benefits of university study within the physical context of an institution. Which is not to say that whatever MOOCs turn into cannot be of value; it’s just that it’s neither a direct alternative nor a replacement but rather a new/emerging form of pedagogical practice.
We can also look to the somewhat obvious Marxian critique of the constant clarion call for technological revolution: far from bringing in egalitarian and widespread access to a better form of living, education and so on, it ushers in the creation of a newly proletarianised class of knowledge workers, trained, in this case, by machines (the machine-learning version of xMOOCs is the example here) and held even further away from access to critical debate and the means of production.
After all, in a ‘mature’ market for technology, devices (and sometimes services) become cheap through mass production and availability. This slashes profit margins, and consumer-users become savvy at reverse engineering and ‘modding’. Customers taking power into their own hands is rather undesirable for the corporate technology producer, unless they can co-opt those developments into the next iteration of the product. Thus, constant ‘innovation’ brings with it the maintenance of a premium for the ‘latest’, ‘must-have’ etc. device/service and necessarily excludes those who cannot pay.
One can easily imagine, then, how a stratification of the market would rapidly take hold. Cheaper, gigantic and formulaic courses (with automated marking of assessments) would be seen as lesser ‘products’ than more exclusive courses (with human tutor support). Those with power and money, in this case, would most-likely still send their children to (very expensive) physical universities with small classes, lots of attention from staff and all of the accoutrements of elite institutions.
Leaving that rather depressing argument aside, the framing of this form of consumer market for higher education is very Anglo-American, where degrees have already become a form of currency – for which there isn’t really an alternative. I cannot help wondering what other forms of education are being ignored (and therefore probably saved). The system of apprenticeship in Germany, for example – where more than half of school-leavers enter apprenticeships that are genuinely valued in society, with a majority of apprentices staying on with their host companies – is very successful and neither needs nor could support a Silicon Valley-style ‘disruption’.
Where does this leave us with regard to Stiegler’s argument that it is precisely the forms of collaboration that are opened up by digital media that can and should be used to transform higher education? Well, the innovative media supports being created in the guise of MOOCs and so on are neither the envisioned radical break(through) claimed in Silicon Valley rhetoric nor a pedagogical nosedive. As with all forms of technicity, MOOCs are pharmacological – they have the capacity to be both ‘poison’ and ‘cure’. If we take seriously Stiegler’s challenge that we need ‘to drive a dynamic for the rethinking of the relationship between knowledge and its media [supports] (of which MOOCs are a possible dimension) with the universities and academic institutions’ then we also need to take (very) seriously Martin’s arguments that designing open education courses/experiences is hard. I’m certainly not going to attempt to offer ‘easy’ or glib answers to such a problem here…
If we want the kind of collaborative learners that Stiegler gestures towards, do we simply hope that they are self-selecting? Almost like postgraduate education is, with motivated students seeking out the opportunities to learn and contribute to the production of knowledge. That, of course, is a relatively small minority of the student population. Equally, we might consider the example of the proactive producers of peer-to-peer knowledge using platforms like Wikipedia, who are self-selecting and a minority relative to the number of ‘passive’ users of the platform. If the degree remains the only currency for employability within certain sectors and for particular kinds of roles then we retain the significant tension between the ideals of the pursuit and production of knowledge, traditionally at the heart of higher education, and the purchasing of a passport for employment (often in an unrelated field, probably in the financial sector) which the university degree has become in the UK.
It seems to me that it is not the university side of higher education that is broken in the UK (although it is always worthwhile striving for the ideals that underpin it); instead it is the preparation for skilled employment that was once provided by a valued system of apprenticeships and polytechnic institutions that has been not only broken but decimated. The renewal of these complementary forms of further and higher education, with the new media supports we are using throughout all areas of life, seems to be an immediate and pressing concern.
Last month Patrick Crogan wrote a great, pithy blogpost about the conduct and conceptualisation of war in relation to the relentless gaze of drones equipped with computer vision technologies that originate in professional sports video analysis. Folding together Derek Gregory’s recent detailed reading of Grégoire Chamayou’s Théorie du drone, the work of the International Committee for Robot Arms Control and Bernard Stiegler’s theorisation of the industrialisation of memory, Patrick highlights how the quasi-autonomous software systems embedded within the complex surveillance and attack capabilities of drones are operating in the very constitution of the events of war – not merely reacting or functioning as equipment, but proactively producing events. Reproduced below…
This post is to start some ideas circulating from work I am increasingly becoming preoccupied with concerning military robotics and AI, as a particular (and also particularly important, in many ways) case of automatizing technologies emerging today. This is a big topic attracting an increasing amount of critical attention, notably from people like Derek Gregory (whose Geographical Imaginations blog is a treasure trove of insights, lines of inquiry and links on much of the work going on round this topic), and Lucy Suchman who is part of the International Committee for Robot Arms Control and brings a critical STS perspective to drones and robotics on her Robot Futures blog.
I’m reading French CNRS researcher Grégoire Chamayou’s Théorie du drone, a book which has made a powerful start on the task of philosophically (as he has it) interrogating the introduction of these new weapons systems which are transforming the conduct, conceptualisation and horizon of war, politics and the technocultural global future today. Many riches in there, but I just read (p. 61) that the U.S. Air Force Intelligence, Surveillance and Reconnaissance Agency, looking for ways to deal with the oceans of video data collected by drones constantly overflying territory with unblinking eyes, obtained a version of software developed by ESPN and used in their coverage of American football. The software provides for the selection and indexing of clips from the multiple camera coverage of football games to enable their rapid recall and use in the analysis of plays, which (as anyone who watches NFL or college football coverage knows) takes up much more time than the play itself in any given broadcast. The software is able to archive footage (from the current or previous games) in a manner that makes it immediately available to the program director in compiling material for comparative analysis, illustration of player performance or tactical/strategic traits of a team, etc. The player and the key play can be systematically broken down, tracked in time, identified as exceptional or part of a broader play style, and so forth.
These capacities are precisely what makes the software desirable to the US Air Force inasmuch as the strategic development of drone operations deals with effectively the same analytical problem: the player and the key play, the insurgent/terrorist and the key act (IED, ambush, etc). The masses of video surveillance of the vast ‘gridded’ space of battlespace, a vast ‘arena’ similarly zoned in precisely measurable slices (but in 3D), must be selectable, taggable and recoverable in such a way as to be usable in the review of drone operations. And the logic (or logistic as Virilio would immediately gloss it) of this treatment of ‘battlespace’ is realised in what has recently emerged unofficially from the Obama administration-Pentagon interface as the emerging strategic deployment of drones by the CIA (which runs a significant and unreported proportion of drone operations globally). This targeting strategy is based precisely on pattern analysis, both in tracking known suspected enemies of the state and in identifying what are called ‘signature targets’ (the signature referring to a ‘data signature’ of otherwise unidentified individuals, one that matches the movements and associations of a known insurgent/terrorist – see Gregory’s post on this in Geographical Imaginations).
The ethical and juridical-political dimensions of this strategy are coming under increasing and much-needed scrutiny (more to come on this). As a media/games theorist, the striking thing about this felicitous mutuality of affordances between pro sport mediatisation technics and those in development for the conduct of drone operations is the reorientation to space it not only metaphorically suggests (war, become game, now steering the metaphoric vehicle back in the other direction) but enacts through an ‘eventization’ (Stiegler) operating in the very constitution of the ‘event’ of war or counter-insurgency (or what James Der Derian called ‘post war warring’). While there are many complicit actors benefiting from the profitable mediatized evolution of American football into a protracted, advertising-friendly broadcast, no such ‘partnership’ exists between key players ‘on the ground’ and those re-processing their data trails.
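The tag-and-recall capacity described above – archiving footage segments so that any combination of labels (player, play type, time) can be pulled up instantly for comparative review – boils down to an inverted index over tagged clips. Here is a minimal, purely illustrative sketch of that logic; all names (`Clip`, `ClipIndex`, the example tags) are hypothetical and have nothing to do with the actual ESPN or Air Force software:

```python
from dataclasses import dataclass


@dataclass
class Clip:
    """A tagged segment of source footage."""
    clip_id: str
    start: float            # seconds into the source footage
    end: float
    tags: frozenset         # e.g. {"player:12", "play:interception"}


class ClipIndex:
    """Inverted index from tag -> clip ids, for rapid recall by any tag set."""

    def __init__(self):
        self._by_tag = {}   # tag -> set of clip ids
        self._clips = {}    # clip id -> Clip

    def add(self, clip):
        self._clips[clip.clip_id] = clip
        for tag in clip.tags:
            self._by_tag.setdefault(tag, set()).add(clip.clip_id)

    def query(self, *tags):
        """Return clips carrying *all* of the given tags, in time order."""
        if not tags:
            return []
        ids = set.intersection(*(self._by_tag.get(t, set()) for t in tags))
        return sorted((self._clips[i] for i in ids), key=lambda c: c.start)


index = ClipIndex()
index.add(Clip("a", 0.0, 8.5, frozenset({"player:12", "play:run"})))
index.add(Clip("b", 30.0, 41.0, frozenset({"player:12", "play:interception"})))
index.add(Clip("c", 60.0, 70.0, frozenset({"player:7", "play:interception"})))

hits = index.query("player:12")
print([c.clip_id for c in hits])  # ['a', 'b']
```

The unsettling point of the analogy is how little changes when the tags stop being plays and players and start being people and ‘signatures’: the same intersection query that compiles a highlight reel compiles a pattern-of-life dossier.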
I will be giving a talk at the Pervasive Media Studio on Friday 14th May entitled ‘A brief history of the future of pervasive media’, which is broadly derived from my PhD research. The talk will be open to the public, so please feel free to come along! Here’s the bumpf:
Pervasive media, and the various forms of computing from which they are derived, stem from a tradition of anticipating future scenarios of technology use. Sam Kinsley’s PhD research concerned the ways in which those involved in pervasive computing research and development imaginatively envision future worlds in which their technologies exist.
This lunchtime talk examines the ways in which future people, places and things are imagined in the research and development of pervasive media. Examples taken from prospective pervasive computing research and development in the last twenty years will be explored as emblematic of such future gazing. The aim is to provide a broad means of understanding the rationales by which technological futures are invoked, so that pervasive media producers can critically reflect on the role of the idea of the future in their work. Such an understanding is important because a history of computing is in large part a history of places and things that were never created – a history of yesterday’s tomorrows.
This is a sub-section of the first chapter of my PhD thesis; it’s my attempt to reflect on Mark Weiser’s legacy in the field of ubiquitous computing.
2009 marked the tenth anniversary of the death of Mark Weiser, a man who many believe earned the title ‘visionary’. As a Principal Scientist and subsequently Chief Technology Officer at Xerox PARC, Weiser has been identified as the ‘godfather’ of ubiquitous computing (ubicomp). In the years since his death many of the ideas that Weiser championed have come to greater prominence. As Yvonne Rogers points out, this influence has been felt across industry, government and commercial research, from the European Union’s ‘disappearing computer’ initiative to MIT’s ‘Oxygen’, HP’s ‘CoolTown’ and Philips’ ‘Vision of the Future’. All of these projects aspired to Weiser’s tenet of the everyday environment, and the objects within it, being embedded with computational capacities such that they might bend to our (human) will. Within the research community, as Genevieve Bell and Paul Dourish remark, ‘almost one quarter of all the papers published in the ‘Ubicomp’ conference between 2001 and 2005 cite Weiser’s foundational articles’.
Timo Arnall points out this video, by a masters student(!), that depicts a slightly nightmarish, yet amusingly ironic, vision of a possible future world with augmented reality, whereby you earn money by subjecting yourself to advertising and depend upon instructions from the system for even basic tasks.
The latter half of the 20th century saw the built environment merged with media space, and architecture taking on new roles related to branding, image and consumerism. Augmented reality may recontextualise the functions of consumerism and architecture, and change the way in which we operate within it.
A film produced for my final year Masters in Architecture, part of a larger project about the social and architectural consequences of new media and augmented reality.