Category Archives: vision

The appeal of the frontier narrative and MOOCs

Following on from the translation I made of Bernard Stiegler’s reflections on how digital (media) technologies can perform a valuable pedagogical role, I wanted to highlight that Martin Weller has given a very cogent and pointed critique of the fairly common narrative of ‘disruptive technology’ in relation to MOOCs.

This brings together two aspects of my own research: the ways in which those involved in computing R&D look to the future and anticipate the kinds of technologies they may want to produce (and the kinds of politics this produces); and what can be seen as the progressive commoditisation of our capacities to think and feel by certain applications of digital media.

Firstly, as Martin identifies in his blogpost, there is a widespread discourse of the necessity of breakthrough, disruption and revolution in the mythology of the aspirational technology sector located in Silicon Valley. This has some obvious foundations in the need to continually destroy and re-create new markets in a finite global system of capital (as David Harvey cogently diagnoses). It also has an interesting basis in alternative discourses of progress rooted in the counterculture movements of that same region of the US, with Stewart Brand (founder of the Whole Earth Network) a significant exponent of libertarian thought in the growing ICT industry that translated into the creation of WIRED magazine as the purveyor of this techno-economic orthodoxy (for more on this see Fred Turner’s brilliant book).

Martin offers the insight that the rather clunky, and somewhat messianic, narrative of the need for an external agent to intervene in a slow, inefficient, outmoded (and so on) sector – central to the disruptive technology spiel – allows sharp and charismatic entrepreneurs to step in as the pseudo-saviour, e.g. Sebastian Thrun of Udacity and others of his ilk. Criticism of the West Coast (capitalist) mythology is not new, of course: Richard Barbrook and Andy Cameron offered a critique of the ‘Californian Ideology’ in the 1990s, and Stiegler has criticised the ‘American model’ of laissez faire ‘cultural capitalism’ led by the ‘programming industries’ of new media (“functionally dedicated to marketing and publicity” [p. 5]) in his The Decadence of Industrial Democracies. Indeed, we can look back to Adorno and Horkheimer’s stinging critique of the Culture Industry as a formidable progenitor. What we can perhaps take from that line of argument is that the case for a supposed ‘greater’ choice is actually a deception: the ‘choice’ is merely to consume more.

Others, who set themselves up as more thoughtful commentators, have also weighed in on the side of the need for a disruption/revolution. Martin highlights that Clay Shirky has also parroted the now well-worn, technologically determinist arc of argument. As with others, Shirky suggests that ‘education is broken‘ and must therefore be fixed by shiny new technology, in the form of MOOCs. Some proponents of this line of argument suggest that this would bring wider access to university level learning. There are a few (also well-worn but compelling) critiques of this line.

An obvious initial critique, as Martin argues in his blogpost, is that the ‘education is broken and so it requires a technological fix’ argument has gained so much traction because it is neat and easy for journalists to digest. A simple story with a clear solution is always going to trump the slightly messy, perhaps convoluted, and multiple stories that approximate the truth, for which there are unclear and troubling political solutions that require quite a lot of explanation and working through.

Furthermore, the existing evidence of engagement with MOOCs also somewhat contradicts the rosy picture painted by their evangelists. Completion rates for MOOCs tell a mixed story (as Martin has pointed out in other blogposts) and this perhaps speaks to the negotiation (by both students and course designers/leaders) of legitimacy and value for these courses – this is a sector still very much in flux. Those who are passionate about providing equal and wider access to university level education are torn: the desire to offer courses that open up (frequently excellent) materials for anyone to access is compelling, but such materials are, of course, only a fraction of what we as university lecturers and students do when running and participating in courses.

We’re all, of course, increasingly proficient at consuming content online and MOOCs leverage that behaviour. What such systems are not so good at is providing something analogous to tutorials. The stand-in for this is peer discussion/support, which, of course, comes with its own social and cultural issues around facilitation, particular participants becoming overbearing, and so on. So, these (socio-)technological fixes are not necessarily a like-for-like stand-in for all of those significant but hard-to-define benefits of university study within the physical context of an institution. Which is not to say that whatever MOOCs turn into cannot be of value; it’s just that they are neither a direct alternative nor a replacement but rather a new/emerging form of pedagogical practice.

We can also look to the somewhat obvious Marxian critique of the constant clarion call for technological revolution: far from bringing in egalitarian and widespread access to a better form of living, education and so on, it ushers in the creation of a new proletarianised class of knowledge worker, trained, in this case, by machines (the machine learning version of xMOOCs is the example here) and held even further away from access to critical debate and the means of production.

After all, in a ‘mature’ market for technology, devices (and sometimes services) become cheap through mass production and availability. This slashes profit margins, and consumer-users become savvy at reverse engineering and ‘modding’. Customers taking power into their own hands is rather undesirable for the corporate technology producer, unless those developments can be co-opted into the next iteration of the product. Thus, constant ‘innovation’ brings with it the maintenance of a premium for the ‘latest’, ‘must-have’ device or service and necessarily excludes those who cannot pay.

One can easily imagine, then, how a stratification of the market would rapidly take hold. Cheaper, gigantic and formulaic courses (with automated marking of assessments) would be seen as lesser ‘products’ than more exclusive courses (with human tutor support). Those with power and money, in this case, would most likely still send their children to (very expensive) physical universities with small classes, lots of attention from staff and all of the accoutrements of elite institutions.

Leaving that rather depressing argument aside, the framing of this form of consumer market for higher education is very Anglo-American, where degrees have already become a form of currency – for which there isn’t really an alternative. I cannot help wondering what other forms of education are being ignored (and therefore probably saved). The apprenticeship system in Germany, for example – where more than half of school-leavers enter apprenticeships that are genuinely valued in society, with a majority of apprentices staying on with their host companies – is very successful and neither needs nor could support a Silicon Valley-style ‘disruption’.

Where does this leave us with regard to Stiegler’s argument that it is precisely the forms of collaboration that are opened up by digital media that can and should be used to transform higher education? Well, the innovative media supports being created in the guise of MOOCs and so on are neither the radical break(through) envisioned in the Silicon Valley rhetoric nor a pedagogical nosedive. As with all forms of technicity, MOOCs are pharmacological – they have the capacity to be both ‘poison’ and ‘cure’. If we take seriously Stiegler’s challenge that we need ‘to drive a dynamic for the rethinking of the relationship between knowledge and its media [supports] (of which MOOCs are a possible dimension) with the universities and academic institutions’ then we also need to take (very) seriously Martin’s arguments that designing open education courses/experiences is hard. I’m certainly not going to attempt to offer ‘easy’ or glib answers to such a problem here…

If we want the kind of collaborative learners that Stiegler gestures towards, do we simply hope that they are self-selecting? Almost like postgraduate education, with motivated students seeking out the opportunities to learn and contribute to the production of knowledge. That, of course, is a relatively small minority of the student population. Equally, we might consider the example of the proactive producers of peer-to-peer knowledge using platforms like Wikipedia, who are self-selecting and a minority relative to the number of ‘passive’ users of the platform. If the degree remains the only currency for employability within certain sectors and for particular kinds of roles then we retain the significant tension between the ideals of the pursuit and production of knowledge, traditionally at the heart of higher education, and the purchasing of a passport for employment (often in an unrelated field, probably in the financial sector) which the university degree has become in the UK.

It seems to me that it is not the university side of higher education that is broken in the UK (although it is always worthwhile striving for the ideals that underpin it); instead it is the preparation for skilled employment, once provided by a valued system of apprenticeships and polytechnic institutions, that has been not only broken but decimated. The renewal of these complementary forms of further and higher education, with the new media supports we are using throughout all areas of life, seems to be an immediate and pressing concern.

Reblog > Drones, sport and ‘eventization’

Last month Patrick Crogan wrote a great, pithy blogpost about the conduct and conceptualisation of war in relation to the relentless gaze of drones arrayed with computer vision technologies that originate from professional sports video analysis. Folding together Derek Gregory’s recent detailed reading of Gregoire Chamayou’s Théorie du drone, the work of the International Committee for Robot Arms Control and Bernard Stiegler’s theorisation of the industrialisation of memory, Patrick highlights how the quasi-autonomous software systems embedded within the complex surveillance and attack capabilities of drones operate in the very constitution of the events of war – not merely reacting or functioning as equipment, but proactively producing events. Reproduced below…

/////

This post is to start some ideas circulating from work I am increasingly becoming preoccupied with concerning military robotics and AI, as a particular (and also particularly important, in many ways) case of automatizing technologies emerging today. This is a big topic attracting an increasing amount of critical attention, notably from people like Derek Gregory (whose Geographical Imaginations blog is a treasure trove of insights, lines of inquiry and links on much of the work going on round this topic), and Lucy Suchman who is part of the International Committee for Robot Arms Control and brings a critical STS perspective to drones and robotics on her Robot Futures blog.


I’m reading French CNRS researcher Gregoire Chamayou’s Théorie du drone, a book which has made a powerful start on the task of philosophically (as he has it) interrogating the introduction of these new weapons systems which are transforming the conduct, conceptualisation and horizon of war, politics and the technocultural global future today. Many riches in there, but I just read (p. 61) that the U.S. Air Force Intelligence, Surveillance and Reconnaissance Agency, looking for ways to deal with the oceans of video data collected by drones constantly overflying territory with unblinking eyes, obtained a version of software developed by ESPN and used in their coverage of American football. The software provides for the selection and indexing of clips from the multiple camera coverage of football games to enable their rapid recall and use in the analysis of plays (which, as anyone who watches NFL or college football coverage knows, takes up much more time than the play itself in any given broadcast). The software is able to archive footage (from the current or previous games) in a manner that makes it immediately available to the program director in compiling material for comparative analysis, illustration of player performance or tactical/strategic traits of a team, etc. The player and the key play can be systematically broken down, tracked in time, identified as exceptional or part of a broader play style, and so forth.
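Purely to illustrate the kind of tag-and-recall indexing being described here – a rough sketch, emphatically not the ESPN or Air Force systems themselves, with entirely hypothetical field names – such an index might be minimally modelled like this:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One tagged segment of footage (fields are hypothetical, for illustration)."""
    source: str   # which camera/feed the clip came from
    start: float  # offset into the footage, in seconds
    end: float
    tags: set = field(default_factory=set)  # e.g. {"player:12", "play:screen-pass"}

class ClipIndex:
    """A toy index: store clips, recall them instantly by tag."""
    def __init__(self):
        self._clips = []

    def add(self, clip):
        self._clips.append(clip)

    def recall(self, *tags):
        """Return every clip carrying all of the requested tags."""
        wanted = set(tags)
        return [c for c in self._clips if wanted <= c.tags]

# Index two plays, then pull back everything tagged for one 'player'.
index = ClipIndex()
index.add(Clip("cam-3", 312.0, 327.5, {"player:12", "play:screen-pass"}))
index.add(Clip("cam-1", 845.2, 861.0, {"player:12", "play:blitz"}))
print(len(index.recall("player:12")))  # -> 2
```

The point is simply that once footage is tagged in this way, any combination of tags becomes an instantly recallable ‘event’.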

These capacities are precisely what makes the software desirable to the US Air Force, inasmuch as the strategic development of drone operations deals with effectively the same analytical problem: the player and the key play, the insurgent/terrorist and the key act (IED, ambush, etc.). The masses of video surveillance of the ‘gridded’ space of battlespace – a vast ‘arena’ similarly zoned in precisely measurable slices (but in 3D) – must be selectable, taggable and recoverable in such a way as to be usable in the review of drone operations. And the logic (or logistic, as Virilio would immediately gloss it) of this treatment of ‘battlespace’ is realised in what has recently emerged unofficially from the Obama administration–Pentagon interface as the strategic deployment of drones by the CIA (which runs a significant and unreported proportion of drone operations globally). This targeting strategy is based precisely on pattern analysis, both in tracking known or suspected enemies of the state and in identifying what are called ‘signature targets’ (the signature referring to a ‘data signature’ of otherwise unidentified individuals, one that matches the movements and associations of a known insurgent/terrorist – see Gregory’s post on this in Geographical Imaginations).

The ethical and juridical-political dimensions of this strategy are coming under increasing and much-needed scrutiny (more to come on this). As a media/games theorist, the striking thing about this felicitous mutuality of affordances between pro sport mediatisation technics and those in development for the conduct of drone operations is the reorientation to space it not only metaphorically suggests (war, having become game, now steering the metaphoric vehicle back in the other direction) but enacts through an ‘eventization’ (Stiegler) operating in the very constitution of the ‘event’ of war or counter-insurgency (or what James Der Derian called ‘post war warring’). While there are many complicit actors benefiting from the profitable mediatized evolution of American football into a protracted, advertising-friendly broadcast, no such ‘partnership’ exists between key players ‘on the ground’ and those re-processing their data trails.

A brief history of the future of pervasive media – Talk at the Pervasive Media Studio

I will be giving a talk at the Pervasive Media Studio on Friday 14th May entitled ‘A brief history of the future of pervasive media’, which is broadly derived from my PhD research. The talk will be open to the public, so please feel free to come along! Here’s the bumpf:

Pervasive media, and the various forms of computing from which they are derived, stem from a tradition of anticipating future scenarios of technology use. Sam Kinsley’s PhD research concerned the ways in which those involved in pervasive computing research and development imaginatively envision future worlds in which their technologies exist.

This lunchtime talk examines the ways in which future people, places and things are imagined in the research and development of pervasive media. Examples taken from prospective pervasive computing research and development in the last twenty years will be explored as emblematic of such future gazing. The aim is to provide a broad means of understanding the rationales by which technological futures are invoked, so that pervasive media producers can critically reflect on the role of the idea of the future in their work. Such an understanding is important because a history of computing is in large part a history of places and things that were never created – a history of yesterday’s tomorrows.

Ubiquitous Computing: Mark Weiser’s vision and legacy

This is a sub-section of the first chapter of my PhD thesis; it’s my attempt to reflect on Mark Weiser’s legacy in the field of ubiquitous computing.

2009 marked the tenth anniversary of the death of Mark Weiser, a man whom many believe earned the title ‘visionary’. As a Principal Scientist and subsequently Chief Technology Officer at Xerox PARC, Weiser has been identified as the ‘godfather’ of ubiquitous computing (ubicomp). In the years since his demise many of the ideas that Weiser championed have come to greater prominence. As Yvonne Rogers points out, this influence has been felt across industry, government and commercial research, from the European Union’s ‘disappearing computer’ initiative to MIT’s ‘Oxygen’, HP’s ‘CoolTown’ and Philips’ ‘Vision of the Future’. All of these projects aspired to Weiser’s tenet of the everyday environment, and the objects within it, being embedded with computational capacities such that they might bend to our (human) will. Within the research community, as Genevieve Bell and Paul Dourish remark, ‘almost one quarter of all the papers published in the ‘Ubicomp’ conference between 2001 and 2005 cite Weiser’s foundational articles’.


Ironic vision of augmented (hyper)reality

Timo Arnall points out this video, by a master’s student(!), that depicts a slightly nightmarish, yet amusingly ironic, vision of a possible future world with augmented reality, whereby you earn money by subjecting yourself to advertising and depend upon instructions from the system for even basic tasks.

The latter half of the 20th century saw the built environment merged with media space, and architecture taking on new roles related to branding, image and consumerism. Augmented reality may recontextualise the functions of consumerism and architecture, and change the way in which we operate within it.

A film produced for my final year Masters in Architecture, part of a larger project about the social and architectural consequences of new media and augmented reality.

Augmented (hyper)Reality by Keiichi Matsuda

[via Timo Arnall & Berg]

Social glue, or: What’s the ‘IMAP’ equivalent for social media?

The launch of Google Buzz has prompted me to raise some things that have been lurking in the back of my mind for some time. These thoughts began when the discussion about the ‘walled garden’ nature of facebook et al. emerged a couple of years ago and led to the initiation of tentative steps towards interconnection and (that horribly overused word) ‘openness’ in the guise of ‘friend connect‘ and ‘facebook connect‘. Twitter was already sort of ahead of the game with their API, as the glut of applications for ‘tweeting’ attests.

Lots of talk in the interweb’s various locations for commentary has centred on the social web, real-time web etc. being based in discrete platforms. This remains somewhat true today. We can certainly connect these services together and form extraordinary information gathering tools in the form of what Howard Rheingold usefully describes as ‘personal information dashboards’, using services such as netvibes and pipes in concert with the various APIs for the platforms we all use. However, this all takes quite a bit of effort at the moment [but! for a good tutorial, please check out Howard’s super videos: #1, #2, #3].

However, for the majority of internet users to usefully stick all of these various platforms and applications together there needs to be a much lower threshold of effort to achieve the desired results. Jyri Engstrom, co-founder of Jaiku and one of the big brains apparently behind ‘Buzz’, articulates the argument well here:

Most of the conversation over the last 24h has been centered around predicting if “Buzz will kill” this or that service. This debate starts from the assumption that Buzz and the rest of the social web are mutually exclusive. It’s arguably fair to assume so, considering all the social networks we’ve got so far are silos. To no longer assume everyone has to be using the same branded system to talk to each other is disruptive to the tech biz discourse, which is obsessed with turning everything into a war over which company is “the one”. So much so that the alternative is almost unthinkable. If the new standards succeed, in 2015 we’ll look back and shake our heads like we shake our heads today at the early days of proprietary phone networks and email systems. The thought that you couldn’t call, text or email people just because they happen to be on another phone operator or email client is laughable. Doubly so for the social Web. The reason many of the current commentators miss this point is that they are, in the immortal words of Walt Whitman, “demented with the mania of owning things.” (borrowing that quote from Doc Searls)

What are these ‘new standards’ then? Well, if we’re to take our cue from Google they consist of the development of the various existing data formats for syndication: extensions of Atom and RSS, such as activity streams and mediaRSS. There may well be families and hierarchies of such data formats and I’m sure hundreds, if not thousands, of developers are already working on creating these things. But I’m still left with this question: what if I don’t want my stuff (information, pictures, etc.) always held on servers owned by Google, facebook etc.? What if I’m happy for such ‘stuff’ to be transient? Which of course such companies don’t want, because your ‘stuff’ is incredibly valuable and they want to mine it for all it’s worth. Nevertheless, my half-formed thoughts are: what’s the equivalent to IMAP for social media?
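To make that a little more concrete, here is a rough sketch (in Python, purely illustrative) of the kind of entry such syndication formats describe – the field names follow the general actor/verb/object shape of the activity streams drafts rather than any exact schema:

```python
import json
from datetime import datetime, timezone

# A hypothetical activity entry: 'someone posted a photo at time T'.
# Field names echo the actor/verb/object pattern of the activity streams
# drafts, but are illustrative only, not a precise rendering of any spec.
activity = {
    "actor": {"id": "https://example.net/people/sam", "displayName": "Sam"},
    "verb": "post",
    "object": {
        "objectType": "photo",
        "url": "https://example.net/photos/42",
        "displayName": "A photo of the harbour",
    },
    "published": datetime.now(timezone.utc).isoformat(),
}

# Serialised like this, the entry could sit in anyone's feed and be read by
# any client that understands the shared format, regardless of which
# 'branded system' produced it.
print(json.dumps(activity, indent=2))
```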

To my mind, the missing ‘glue’ for the social networking ecosystem is a service architecture that allows all of us to host our own streams and tie together the various bits of our rapidly growing, perhaps increasingly ‘public’, ‘digital identity’. Social media could easily be distributed, just as blogs and ‘web 1.0’ are. What’s to stop a community creating something like wordpress or drupal for activity/social streams? If the standards suggested by Google really are that versatile then all that is necessary is to create a system that imports/exports using them. Search would be renewed in its importance, but companies/services like twitter could remain successful by facilitating that search functionality and helping users subscribe to one another’s feeds/streams.
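And to gesture at how low the technical bar for that ‘glue’ might be, here is a deliberately naive sketch (standard-library Python only; the endpoint name and data are hypothetical) of what self-hosting an activity stream could look like – each of us runs something like this on our own server, and anyone else’s client or aggregator simply fetches and merges the JSON:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy, self-hosted activity stream: the 'stuff' stays on my server,
# and anything that understands the shared format can subscribe to it.
MY_STREAM = [
    {"actor": "sam", "verb": "post", "object": "https://example.net/photos/42"},
    {"actor": "sam", "verb": "favorite", "object": "https://example.org/notes/7"},
]

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/stream.json":  # hypothetical endpoint name
            body = json.dumps(MY_STREAM).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Anyone's aggregator could now fetch http://<my-host>:8080/stream.json
    # and merge it with streams hosted elsewhere.
    HTTPServer(("", 8080), StreamHandler).serve_forever()
```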

A couple of years ago I thought about it in terms of a ‘meta-platform’ or ‘platform for platforms’, but we’ve kind of seen these, in the form of friendfeed and their ilk. Now I think, well, it could all still happen over port 80 with web traffic – it just needs an architecture to allow people to stick things up on their own servers and interconnect. If we take what Jyri says (above) seriously, it seems to me the logical step is to really set the social web ‘free’ and build the elements required to allow people to host their own activity streams. Maybe this is already happening. To build on Jyri’s theme of looking to a future and to paraphrase Alan Kay: “the best way to predict the future is to [build] it”. Go to it!!

Reflecting on Mark Weiser’s legacy ten years on

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

– Mark Weiser, 1991, “The Computer for the 21st Century”, Scientific American

The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user… I call this future world “Ubiquitous Computing” (Ubicomp).

– Mark Weiser, 1993, “Some Computer Science Issues in Ubiquitous Computing”, Communications of the ACM

2009 marks the tenth anniversary of the death of a man whom many believe earned the title ‘visionary’: Mark Weiser. As a Principal Scientist and subsequently Chief Technology Officer at Xerox PARC, Weiser is best known as the ‘godfather’ of ubiquitous computing. In the years since his demise many of the ideas that Weiser championed have come to greater prominence. As Yvonne Rogers points out, this influence has been felt across industry, government and commercial research, from the EU’s ‘disappearing computer’ initiative to MIT’s ‘Oxygen’, HP’s ‘CoolTown’ and Philips’ ‘Vision of the Future’. All of these projects aspired to Weiser’s tenet of the everyday environment, and the objects within it, being embedded with computational capacities such that they might bend to our (human) will. Within the research community, as Bell and Dourish remark, ‘of the 108 papers comprising the Ubicomp conference proceedings between 2001 and 2004, fully 47% of the papers are oriented towards a proximate (and inevitable) technological future’ and ‘almost one quarter of all the papers published in the Ubicomp conference between 2001 and 2005 cite Weiser’s foundational articles’.


Neologism ~ “spectaculation”

Science Buzz! Flickr photo by Unhindered by Talent

I’m no fan of coining neologisms, but(!) I think I have a need for a word that pithily and succinctly allows me to cast mild derision at certain forms of speculation. It seems to be possible to carve out a career by publicising one’s work in ways that stretch beyond the conventional remit of a particular project and make grand claims about ‘progress’. This is often identifiable by the monotonous use of phrases such as “in the future you/we will…”. Sometimes this is excusable – people get excitedly exuberant about their research and ideas (sometimes it’s done for you!) – but other times it is clearly a deliberate tactic. Thus, I think we can describe what they’re up to as ‘spectaculation’. For it is not idle speculation but the taking of a speculative claim and widening its application, making it sound more important and thus more newsworthy, i.e. spectacular. So we arrive at spectaculation, and of course somebody else (probably lots of people actually) has thought of this already (in a slightly different sense): credit where it’s due.

Image credit: Flickr user ‘Unhindered by Talent’.

Ubiquitous Computing video, circa 1991

“Coined by the Xerox Palo Alto Research Center’s (PARC) Computer Science Laboratory (CSL), [Ubiquitous Computing] describes a vision of the future. Just as electric motors have disappeared into the background of everyday life, PARC scientists envision a future where mobile computational devices will be similarly transparent. Potentially numbering in the 100s per person, these devices are nothing like those you use today. They are mobile. They know their location, and they communicate with their environment.”

I have no idea if I’m allowed to put this up but it seems a desperate shame that this video isn’t held in one complete file, easily accessible to the public and to researchers, given the historical significance of the work conducted on ubicomp at PARC by Mark Weiser et al. during the late 80s and early 90s. Please see the original files here: http://www.ubiq.com/hypertext/weiser/UbiMovies.html and read more about Mark Weiser by sticking his name in Google.

Please note that I had to edit out 2 minutes of the more technical stuff to get the video down to under 10 minutes.