It seems many theorists and practitioners are identifying a coming convergence of several seemingly unrelated technologies into a critical mass that will spawn a widespread revolution in our experience of space.
[Please refer to my Bibliography for the sources of quotes.]
As Howard Rheingold explains:
Such a critical mass is made possible by the market economy. As technologies become more widespread they become mass-produced and thus become extremely cheap to produce. So cheap, in fact, that it has allowed numerous areas of research and development to become more economical. Where they were progressing at a painfully slow pace for years they are now accelerating because sufficient computation and communication capabilities have become affordable. These projects originated from different fields but are now converging on the same boundary between the actual and the virtual. (Rheingold, 2002: 84)
Although it is now clear that they are all related to the promotion and creation of pervasive hybrid spaces, there is a bewildering number of labels for the research activities approaching this same goal from a variety of angles. These include Ubiquitous Computing at Xerox PARC (and many others), Tangible Interfaces at the MIT MediaLab, Wearable Computing (principally at the University of Oregon), Context-aware Computing at the MIT MediaLab and the Georgia Institute of Technology, and Smart Rooms/Objects, also at the MIT MediaLab.
Whilst there is an array of in-depth descriptions for each of these research activities, highlighting their various nuances, it is possible to gather them all under the popular blanket term ‘Mixed Reality’. However, the ambiguity of a term built on a concept of reality risks drawing this work into extensive philosophical discourse beyond its bounds. A term more specific to the exploration of hybrid spaces is ‘Augmented Space’. Coined by Lev Manovich (2002), Augmented Space is itself derived from another very specific field, Augmented Reality (AR), which Manovich contrasts with ‘virtual reality’ (VR). With a typical VR system, all the work is done in a virtual space; physical space becomes unnecessary and its vision is completely blocked. In contrast, AR systems help the user to do the work in a physical space by augmenting it with additional information.
When mobile phone ownership grew faster than many expected, it became clear that there were definite commercial prospects for the development of ‘lifestyle’ products that could take advantage of society’s positive acceptance of living in Cellspace. It is important to note that ideas of human augmentation are not new; in fact, Augmented Space research has produced ideas similar to those that helped spawn computer culture. Douglas Engelbart envisioned a computer augmenting human intellect forty years ago. However, Engelbart’s ideas and the
related visions of Vannevar Bush assumed a stationary user – a scientist or engineer working in his office. Revolutionary for their time, these ideas anticipated the paradigm of desktop computing. Today, however, we are gradually moving into the next paradigm where computing and telecommunication are delivered to an untethered, mobile user. (Manovich, 2002)
The engineering research departments and companies that gave birth to our understanding of virtual worlds, through the invention of the personal computer and the Internet, are the same institutions that look set to shape our emerging augmented space. One of the most prominent companies helping to realise our new spaces is Intel. In February 2002 Intel’s Chief Technology Officer announced that in the near future the company would include radio communications technology in every chip it manufactures.
Having produced some of the most influential academics in the field of Augmented Space research, the MediaLab at MIT has a very clear understanding of where such research is heading. This is outlined by the Academic Head of the MediaLab, Alex Pentland, who identifies a deep divide between the world of bits, Virtuality, and the world of atoms, Actuality. Machines currently have no senses; they neither see nor hear, and they are not aware of us unless we explicitly instruct them. Consequently, only experts use machines to their full potential, and even they must mediate such use through very specific programming languages. Pentland explains his research goal as merging the actual and the virtual more closely by giving machines perceptual abilities. This might result in machines recognising faces, understanding when people are happy or sick, and perceiving common working environments. ‘Roughly, it is making machines know who, what, where, when and why, so that devices that surround us can respond more appropriately and helpfully’ (2004). Such research is producing ‘Smart Rooms’ and ‘Smart Clothes’ that are perceptually aware and allow exploration of diverse applications in health care, entertainment and many other areas. It is probably still too early to tell whether such products will be successful; however, one thing seems certain: many research groups, both academic and commercial, are working towards the creation of devices that will create augmented spaces. We need no longer ask how such technologies and spaces will be made, but when.
We must surely expect a profound shift in our perception of the spaces in which we live when those spaces begin to evolve in this way. When spaces themselves respond to our constant interaction, a mental construct that has remained relatively unchallenged for millennia will be drastically and irrevocably broken apart and recombined. Thus, in our Cellspaces, we are in the first throes of a landslide of technological and social change that will gather momentum over the coming decade as more Augmented Space technologies emerge.
Posted by Sam at February 29, 2004 06:58 PM