Category Archives: technology

Handbook on Geographies of Technology

Edward Elgar have a new title advertised: The Handbook on Geographies of Technology, edited by Prof. Barney Warf (Kansas). It is good to see such a volume being produced and encouraging that someone as qualified as Prof. Warf is editing it. Having said that, there are two things I find disappointing about the information on the EE webpage for the book: 1) the price: £162 with a “web discount”. Why?! 2) the rather narrow range of topics and conceptual engagement – at least as indicated by the section and chapter titles. I understand that such a book cannot cover everything, but only three chapters on conceptual issues seems a bit stingy (although if you only had three available they are well-chosen), and the relatively traditional themes chosen (comms., transport, energy, manufacturing and life science) seem like a bit of a missed opportunity. I look forward to seeing a copy, somehow (cos I cannot afford to buy one!!), to get a better sense of the contents. I hope there may be material in there to support my final-year option module: Geographies of Technology.

Here’s the blurb:

This Handbook offers an insightful and comprehensive overview from a geographic perspective of the numerous and varied technologies that are shaping the contemporary world. It shows how geography and technology are intimately linked by examining the origins, growth, and impacts of 27 different technologies and highlighting how they influence the structure and spatiality of society. Following summaries of important conceptual issues such as diffusion, gender and science studies, the book explores various technologies from different categories including computational, communications, transportation, energy, manufacturing and life sciences.

Talk: ‘imagining automation’, part of ‘Human centred design in the age of automation’

I’m doing a talk on 25th October for the ‘SW Futurists’ meet-up. The event is called “Human centred design in the age of automation” and I’ll be talking about how ‘automation’ gets imagined and what kinds of power or agency those imaginings might have.

There are two speakers; the other is Lucas Godfrey, a design researcher at Edinburgh, who had a hand in inviting me to speak.

There’s a web page for the event on meetup.com with descriptions of both talks.

The blurb for my talk is here:

Imagining automation 
We are told that we face the widespread automation of jobs, that we are increasingly subject to ever more complex and intrusive ‘algorithmic’ surveillance, and that complex computational systems, such as Google’s search, know us better than we know ourselves. What are the bases of such claims? How much do they say about the world we live in? And do such claims say more about the stories we tell about digital technologies? By talking through some examples, ranging from ill-fated chatbots to production lines and missions to Mars, I invite you to consider with me some possible answers to these questions. In doing so, I hope we can reflect a little more on the power of the stories we tell about technology and how such power gets exercised.

N. Katherine Hayles on UAVs/drones as ‘cognitive assemblages’

In a recent article in Critical Inquiry, N. Katherine Hayles formulates an understanding of particular kinds of technological support as ‘cognitive assemblages’ (sort of following Deleuze & Guattari*). She takes as her particular case study the advent of swarming, quasi-autonomous UAVs/drones and their use in warfare. The final paragraph of the piece is interesting but, for me, raises as many questions as it seeks to answer:

Language, human sociality, somatic responses, and technological adaptations, along with emotion, are crucial to the formation of modern humans. Whether warfare should be added to the list may be controversial, but the twentieth and twenty-first centuries suggest that it will persist, albeit in modified forms. As the informational networks and feedback loops connecting us and our devices proliferate and deepen, we can no longer afford the illusion that consciousness alone steers our ships. How should we reimagine contemporary cognitive ecologies so that they become life enhancing rather than aimed toward dysfunctionality and death for humans and nonhumans alike? Recognizing the role played by nonconscious cognitions in human/technical hybrids and conceptualizing them as cognitive assemblages is of course not a complete answer, but it is a necessary component. We need to recognize that when we design, implement, and extend technical cognitive systems, we are partially designing ourselves. We must take care accordingly. More accurate and encompassing views of how our cognitions enmesh with technical systems and those of other life forms will enable better designs, humbler perceptions of human roles in planetary cognitive ecologies, and more life-affirming practices as we move toward a future in which technical agency and autonomy become increasingly intrinsic to human complex systems.

(Hayles, 2016: 55)

A couple of immediate questions pop out for me:

  1. Could you get designers of UAVs to factor in affective responses without reducing them to something like galvanic skin response or another quantifiable measure (which Hayles critiques in relation to MIT Prof. Sandy Pentland’s proselytisation of a ‘sociometer‘)?
  2. What is meant by “accuracy” in that final sentence?! How might we qualify the idea of “better” designs? This seems to assert a kind of ethics (and maybe aesthetics) antithetical to the institutions and companies that make military equipment; by not addressing this, the piece risks simply making naive assertions.

I appreciate Hayles’ attempt to harness a broadly Deleuzian understanding of cognition (which might be understood as “affect theory”) in attending to pressing contemporary issues such as the rise of “killer robots” (or quasi-autonomous technological platforms that can inflict death). However, it seems to me that the paper uses the case studies (taken from other researchers’ work, such as Chamayou’s) to validate the theory, rather than using the theory to critically interrogate the empirical state of affairs. This, it seems to me, is a shame (not least because there’s a fruitful application of aspects of How We Became Posthuman here) and, as observed above, leaves more questions than answers. Maybe that’s productive – it can open debates – but others are doing slightly more to qualify how we might problematise ethics in this arena. I’d recommend taking a look at Lucy Suchman’s work, especially “robot futures”, and the Campaign to Stop Killer Robots.

Addendum: I’m not suggesting that Chamayou and other ‘droners’ are “right” and Hayles is somehow wrong… I’d definitely agree with Prof. Louise Amoore, who suggested on Twitter that those folk could do with reading Hayles’ (and Suchman’s) work.

* I’m uncertain about the proposition of ‘cognitive assemblages’ – if we were to follow D&G’s theory of agencements, would not all ‘assemblages’ be cognitive? The implication seems to be that the ‘cognitive’ in Hayles’ formulation is human cognition – which implies a human exceptionalism that might be seen as antithetical to D&G’s philosophy.

A dream of an algorithm – Agnieszka Zimolag

Photo by: Agnieszka Zimolag

A compelling piece on the Institute of Network Cultures‘ “long forms” website by Agnieszka Zimolag entitled “A dream of an algorithm” explores “our” relation to “technology”, or (to my mind) technics as the co-constitutional (and originary [not essential]) relation between what we call, in common-sense terms, “the human” and “technology”.

Technology is my reflection. Just as if I never see myself unless I look in the mirror, the same goes for technology. Once I plug, once I turn on the devices, I look at my comforting reflection, I can finally become one with my own image. The perception of the self becomes a continuum, a reassurance of my own existence. – Zimolag

It’s a beautiful essay and well worth reading/looking at in full – the images are lovely and there’s some thoughtful prose reflecting upon this relation. I guess where I’d differ from Zimolag is in her implied assertion that technology, and what it does or can do, somehow comes after ‘the human’; that technology does something to “us” (humans) that alters us. I can see why, in the face of rapid technological change, we might figure things in this way. Nevertheless, I’m inclined to insist that there never was a “human” somehow separate from “technology”. We have an ‘aporetic origin’, in Derrida and Stiegler’s terms: we continually perform the coming into being of “the human”.

As I’ve observed elsewhere, the theory of technogenesis offered by a range of anthropologists and philosophers is the idea that humans and technology co-evolved: you do not get one without the other. Humans are irreducibly distinct because of the reflexive transmission of complex cultures made possible by technics and the ‘exteriorization’ of thought. Both Stiegler and Derrida argue that the mental interior is only recognized as such with the advent of the technical exterior: our conscious self-knowledge is only possible with the ability to exteriorize thought as a trace, commonly as language and gesture. Stiegler explains this aporia of origin as a paradox: ‘The paradox is to have to speak of an exteriorization without a preceding interior: the interior is constituted in exteriorization’ (Stiegler, Technics and Time, 1, p. 141).

Technics can be thought of as a technogenetic ‘double bind’: technics is both constitutive of and a supplement to ‘the human’. The interior and exterior, and with them the contemporary understanding of the experience of being human and what we understand to be technology, are mutually co-constituted and continue to be so.

In opposing ‘technology’ and ‘the human’, the apparently immaterial ‘algorithm’ and apparently concrete human, we either oppose technically mediated experience to other forms of experience or we oppose our technical life to other, apparently ‘natural’, forms of existence. Either way, we risk reasserting old, problematic binaries: human/technology and nature/society.

To bring this back to ‘algorithms’ and their devices, following Zimolag: they are not our ‘others’. The technologies discussed and depicted in Zimolag’s beautiful essay are materially of and with us. They are perhaps not so much the ‘reflection’ she asserts (above) but rather a mirror that we ourselves have crafted through which to look upon ourselves – and, precisely in doing so, the “we” is irrevocably altered to include the mirror. The knowledge born from coming to know our own images, in Zimolag’s metaphor, is irreducibly tied to technics.

A lack of politics in the geographies of code/software?

An interesting provocation for those who feel wedded to the ‘digital turn‘ from Mark Purcell on his blog… in particular:

the work, in general, seems to be quite aloof, or detached, or trying to stay above the fray, to remain non-committal, as though that were the more professional, academic stance to take. All this detachment seems to have produced an upshot that is something like: “with all the new technologies coming into our lives in the past 10 years or so, it is important to think through their implications instead of just adopting them uncritically.”

Perhaps those that do “geography o[f] software/ information/ geodata” would like to respond…(?) For me, I think, there is simply a difference in focus between Purcell’s locating of politics and, for example, that of his colleague at Washington, Sarah Elwood, in relation to “geodata” – perhaps the difference between a politics of production as such and a politics of implementation.

Nevertheless, Purcell’s point about commons and peer production in open source software is valid – perhaps those involved in recent conference sessions on geographies of software have addressed these issues in some way? (I don’t know, I wasn’t there…)

Read Mark Purcell’s full blogpost on his blog.

“At play on the field of ghosts” – James Bridle on code/spaces of competitive sport

Reflecting upon the increasing instrumentation of the sporting field of play, for spectating, e.g. the ‘Hanwha chickens‘, and for the judgement of rules, e.g. ‘Hawk-Eye’, James Bridle has written a nice piece on Medium about how the idea/ideal of ‘sport’ may be getting translated into something else…

This distinction between the actuality of the event and the fidelity of its recreation is narrow and could easily be dismissed as just another conjuration of spectacular TV coverage, were its remit limited to mere representation. But in the hyper-competitive domain of sports, lubricated with broadcasting and gambling dollars, recreation turns into prediction, and representation into judgement. The distinction between what is seen and what occurs becomes crucial.

More and more, the practice of human adjudication in sports is being crowded out by the supposed superiority of machine perception; a perception which is based on the recreation and prediction of real events, rather than their explicit witnessing. Since 2001, the Hawk-Eye computer system has become increasingly ubiquitous in major sporting competitions, combining machine vision with motion analysis to not only declare where precisely a ball touched or crossed a line, but where the ball would have gone if it were not rudely interrupted.
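As a toy illustration of the ‘recreation and prediction’ Bridle describes: a system like Hawk-Eye triangulates ball positions from multiple camera views, fits a motion model to those observations, and extrapolates where the ball did (or would have) crossed a line. The sketch below is a minimal, hypothetical version of just the extrapolation step – the sample numbers and the simple constant-velocity-plus-gravity model are my assumptions for illustration, not Hawk-Eye’s actual (proprietary) method.

```python
# Toy sketch: predict where a tracked ball crosses a boundary line by
# fitting a simple motion model to observed positions and extrapolating.
# (Illustrative only; real systems fuse many calibrated camera views.)
import numpy as np

# Hypothetical tracking data: timestamps (s) and positions (m), as might
# be triangulated from several high-speed cameras.
t = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
x = np.array([0.00, 0.40, 0.79, 1.19, 1.58])   # towards the line at x = 2.0
y = np.array([1.00, 0.96, 0.90, 0.82, 0.72])   # height, falling under gravity

# Fit x(t) linearly (constant velocity) and y(t) quadratically (gravity).
x_model = np.polyfit(t, x, 1)   # [velocity, intercept]
y_model = np.polyfit(t, y, 2)   # [1/2 g, v_y0, y0]

# Extrapolate: when does the ball reach the line, and at what height?
LINE_X = 2.0
t_cross = (LINE_X - x_model[1]) / x_model[0]
height_at_line = np.polyval(y_model, t_cross)
print(f"predicted crossing at t={t_cross:.3f}s, height={height_at_line:.2f}m")
```

The point of the toy is Bridle’s: the “decision” is an inference from a fitted model of the event, not a direct witnessing of it.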

Behaviourism, productivity & the ‘quantified self’


Steven Poole, in the New Statesman, on “How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism“:

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

Echoes a common trope of theorising “subjectiv(is)ation” as a process (often labelled “neoliberal”, for good measure) that gets done to a being/person… which gets entangled in an all-encompassing ‘labour theory of value’, but that’s a longer discussion…

CFP> “Countercultures of Data” – Special Issue of Philosophy & Technology

Another interesting CFP – and goodness knows this needs doing(!). I look forward to seeing what comes of it; it should be great!

CFP: “COUNTERCULTURES OF DATA” – SPECIAL ISSUE OF PHILOSOPHY & TECHNOLOGY

Call for Papers for Philosophy and Technology’s special issue on Countercultures of Data

Guest Editor – Anna Lauren Hoffmann, School of Information – University of California, Berkeley

About the Issue
25 years ago, Sandra Harding—in her influential book Whose Science? Whose Knowledge? Thinking from Women’s Lives—detailed and extended critical debates surrounding knowledge production and practices in science and technology. Collectively, these “countercultures of science” confronted the “problematics, agendas, ethics, and consequences” of scientific and technological production head on. Today, these same perspectives offer insight into the realm of data science, as philosophers, scholars, and practitioners alike grapple with ethical questions in a world where discourse, design, and governance increasingly revolve around “big” data and quantifiable knowledge.

This special issue will bring together rigorous conceptual and theoretical perspectives on what might best be called—following Harding—emerging “countercultures of data.” In particular, this issue will further critical and philosophical thinking about the theories, methods, institutions, and technological arrangements that underwrite or support data science in various industries and forms. Combined, contributions to the special issue will put forward a more realistic assessment of possible futures for a data-driven world.

We invite submissions related (but not limited) to:

– Race and Data Science
– Theories of Property, Labor, and Data
– Political Economies of Data
– Data and Imperialism
– Feminist Perspectives on Data Science
– Data, Bodies, and Disability
– Data, Infrastructure, and the Environment
– Data, Philosophy, and the Law
– Communities and Data
– Data and Queer Subjects
– Data and/as Human Subjects in Research
– Data Science and Epistemic Justice

Timetable for Submissions
October 24, 2016: Deadline for paper submissions
December 21, 2016: Deadline for reviews of papers
February 6, 2017: Deadline for revised papers
2017: Publication of the special issue

Submission Details
To submit a paper for this special issue, authors should go to the journal’s Editorial Manager http://www.editorialmanager.com/phte/
The author (or a corresponding author for each submission in the case of co-authored papers) must register in EM.

The author must then select the special article type: “COUNTERCULTURES OF DATA” from the selection provided in the submission process. This is needed in order to assign the submissions to the Guest Editor.

Submissions will then be assessed according to the following procedure:
New Submission => Journal Editorial Office => Guest Editor(s) => Reviewers => Reviewers’ Recommendations => Guest Editor(s)’ Recommendation => Editor-in-Chief’s Final Decision => Author Notification of the Decision. (The process will be reiterated in case of requests for revisions.)

About the Journal
The journal addresses the expanding scope and unprecedented impact of technologies, in order to improve critical understanding of their conceptual nature and practical consequences, and hence to provide the conceptual foundations for their fruitful and sustainable development. The journal welcomes high-quality submissions, regardless of the tradition, school of thought or disciplinary background from which they derive. The journal’s Editor-in-Chief is Luciano Floridi (Oxford).

Contact
For any further information please contact: Anna Lauren Hoffmann – annalauren [at] berkeley [dot] edu