Are we all addicts now? Furtherfield 16 Sept – 12 Nov.

A still from the Black Mirror episode "The Entire History of You"

This looks really interesting…

Are We All Addicts Now?

Furtherfield Gallery, 16 September – 12 November 2017.

Featuring Katriona Beales and Fiona MacDonald.

The exhibition and research project Are We All Addicts Now? explores the seductive and addictive qualities of the digital.

Artist Katriona Beales’ work addresses the sensual and tactile conditions of her life lived online: the saturated colour and meditative allure of glowing screens, the addictive potential of infinite scroll and notification streams. Her new body of work for AWAAN re-imagines the private spaces in which we play out our digital existence. The exhibition includes glass sculptures containing embedded screens, moving image works and digitally printed textiles. Beales’ work is complemented by a new sound-art work by artist and curator Fiona MacDonald: Feral Practice.

Beales celebrates the sensuality and appeal of online spaces, but criticises how our interactions get channelled through platforms designed to be addictive – how corporations use various ‘gamification’ and ‘neuro-marketing’ techniques to keep the ‘user’ on-device, to drive endless circulation, and monetise our every click. She suggests that in succumbing to online behavioural norms we emerge as ‘perfect capitalist subjects’.

For Furtherfield, Beales has constructed a sunken ‘bed’ into which visitors are invited to climb, where a glowing glass orb flutters with virtual moths repeatedly bashing the edges of an embedded screen. A video installation, reminiscent of a fruit machine, displays a drum of hypnotically spinning images whose rotation is triggered by the movement of gallery visitors. Beales recreates the peculiar, sometimes disquieting, image clashes experienced during her insomniac journeys through endless online picture streams – beauty products lining up with death; naked cats with armed police.


Entering the Machine Zone (2017) Katriona Beales

Glass-topped tables support the amorphous curves of heavy glass sculptures, which refract the multi-coloured light of tiny screens hidden inside. Visualisations of eye-tracking data (harvested live from gallery visitors) scatter across the ceiling. On the exterior wall of the gallery, an LED scrolling sign displays text Beales has compiled, based on comments from online forums about internet addiction.

Where Beales addresses the near-inescapability of machine-driven connection, Feral Practice draws us into the networks in nature. Mycorrhizal Meditation is a sound-art work for free download, accessed via posters in Furtherfield Gallery and across Finsbury Park. MM takes the form of a guided meditation, journeying through the human body and down into the ‘underworld’ of living soil, with its mycorrhizal network formed of plant roots and fungal threads. It combines spoken word and sound recordings of movement and rhythm made in wooded places. Feral Practice complicates the idea of nature as ‘ultimate digital detox’, and alerts us to the startling interconnectivity of beyond-human nature, the ‘wood-wide-web’ that pre-dates our digital connectivity by millennia. (Download Mycorrhizal Meditation here)

Reblog> Digital Neuroland

Rick Moranis in Ghostbusters with a brain scanner on his head

Via Tony Sampson. Freely available digital publication; follow the link.

Digital Neuroland by Rizosfera

Very pleased to be part of this great series…

Digital Neuroland cover

Contents

Introduction by Rizosfera
Digital Neuroland. An interview with Tony D. Sampson by Rizosfera
Contagion Theory Beyond the Microbe
‘Tarde as Media Theorist’: an interview with Tony D. Sampson by Jussi Parikka
Crowd, Power and Post-democracy in 21st Century by Obsolete Capitalism
Crowds vs publics, Ukraine vs Russia, the Gaza crisis, the contagion theory and netica – a dialogue with Tony D. Sampson by Rares Iordache

Reblog> Humans and machines at work

A warehouse worker and robot

Via Phoebe Moore. Looks good >>

Humans and Machines cover

Humans and machines at work: monitoring, surveillance and automation in contemporary capitalism, edited by Phoebe V. Moore, Martin Upchurch and Xanthe Whittaker.

This edited collection is now in production/press (Palgrave, Dynamics of Virtual Work series, edited by Ursula Huws and Rosalind Gill). It is the result of the symposium I organised for last year’s International Labour Process Conference (ILPC). We are fortunate to have nine women and three men as authors from all over the world, including researchers from the Chinese University of Hong Kong, Harvard, Washington University in St Louis, Milan, Sheffield, Lancaster, King’s College, Greenwich and Middlesex, two trade unionists from UNI Global Union and the Institute for Employment Rights, and both early-career and more established contributors.

In the era of the so-called Fourth Industrial Revolution, we increasingly work with machines in both cognitive and manual workplaces. This collection provides a series of accounts of workers’ local experiences that reflect the ubiquity of work’s digitalisation. Precarious gig economy workers ride bikes and drive taxis in China and Britain; domestic workers’ timekeeping and movements are documented; call centre workers in India experience invasive tracking, but creative forms of worker subversion are evident; warehouse workers discover that hidden data has been used for layoffs; academic researchers see their labour obscured by a ‘data foam’ that does not benefit them; and journalists suffer the algorithmic curse. These cases are couched in historical accounts of identity and selfhood experiments, from the Hawthorne experiments to the lineage of automation. This collection will appeal to scholars in the sociology of work and digital labour studies, and to anyone interested in learning about monitoring and surveillance, automation, the gig economy and the quantified self in workplaces.

Table of contents:

Chapter 1: Introduction. Phoebe V. Moore, Martin Upchurch, Xanthe Whittaker

Chapter 2: Digitalisation of work and resistance. Phoebe V. Moore, Pav Akhtar, Martin Upchurch

Chapter 3: Deep automation and the world of work. Martin Upchurch, Phoebe V. Moore

Chapter 4: There is only one thing in life worse than being watched, and that is not being watched: Digital data analytics and the reorganisation of newspaper production. Xanthe Whittaker

Chapter 5: The electronic monitoring of care work – the redefinition of paid working time. Sian Moore and L. J. B. Hayes

Chapter 6: Social recruiting: control and surveillance in a digitised job market. Alessandro Gandini and Ivana Pais

Chapter 7: Close watch of a distant manager: Multisurveillance by transnational clients in Indian call centres. Winifred R. Poster

Chapter 8: Hawthorne’s renewal: Quantified total self. Rebecca Lemov

Chapter 9: ‘Putting it together, that’s what counts’: Data foam, a Snowball and researcher evaluation. Penny C. S. Andrews

Chapter 10: Technologies of control, communication, and calculation: Taxi drivers’ labour in the platform economy. Julie Yujie Chen

Gilbreth’s motion studies films

Gilbreth motion studies light painting

Via Motion Pictures in the Human Sciences.
Parts of the lineage of automation can be traced through this work:

“The Original Films of Frank Gilbreth”

The “industrial efficiency” expert Frank Gilbreth (1868-1924) conducted his motion studies, both in the United States and abroad, in factories, offices, hospitals and other workplaces between about 1910 and 1924. Gilbreth made his films with a 35 mm hand crank camera. The films below, despite their claim to be “original,” were actually compiled by James Perkins and Lillian Gilbreth after Gilbreth’s death in 1924. The original films are presumed lost. The above versions are silent and are from the Internet Archive; the Archives Center at the National Museum of American History has another, slightly different version, narrated by Lillian Gilbreth.

Our vacillating accounts of the agency of automated things

Rachael in the film Blade Runner

“There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.”

Alexis Madrigal

I wonder whether we have, if not an increasing, then certainly a more visible problem with addressing the agency of automated processes – in particular, automation that functions predominantly through software, i.e. stuff we refer to as ‘algorithms’ and ‘algorithmic’, possibly ‘intelligent’ or ‘smart’, and perhaps even ‘AI’, ‘machine learning’ and so on. I read three things this morning that seemed to come together to concretise this thought: Alexis Madrigal’s article in The Atlantic – “Google and Facebook have failed us“, James Somers’ article in The Atlantic – “The coming software apocalypse” and LM Sacacas’ blogpost “Machines for the evasion of moral responsibility“.

In Madrigal’s article we can see how the apparent autonomy of the ‘algorithm’ becomes the fulcrum around which machinations over ‘fake news’ turn, in this case regarding the 1 October 2017 mass shooting in Las Vegas. On the one hand, the apparent incapacity of an automated software system to perform the kinds of reasoning attributable to a ‘human’ editor is diagnosed; on the other, the speed at which such breaking news events take place and the volume of data being processed by ‘the algorithm’ led to Google admitting that its software was “briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries”. Madrigal asserts:

It’s no longer good enough to shrug off (“briefly,” “for a small number of queries”) the problems in the system simply because it has computers in the decision loop.

In Somers’ article we can see how decisions made by programmers writing software that sorted and counted calls for the emergency services in Washington State led to the 911 phone system being inaccessible to callers for six hours one night in 2014. As Somers describes:

The 911 outage… was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
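To make the failure mode concrete, here is a minimal, hypothetical sketch of the pattern Somers describes: a running counter doubling as the source of unique call identifiers, capped by a hard-coded threshold, with no alarm when the cap is hit. The names and numbers are my own illustration, not Intrado’s actual code.

```python
# Hypothetical illustration of the 911 routing failure described above.
# Not Intrado's code: names, numbers and structure are invented for clarity.

MAX_CALLS = 40_000_000  # hard-coded ceiling, "a number in the millions"

call_counter = 0  # running count of calls routed since the system started


def dispatch(call_id, call):
    """Placeholder for handing the call to a 911 dispatch centre."""
    print(f"routing call {call_id}: {call}")


def route_call(call):
    """Route a call, using the running counter as its unique identifier."""
    global call_counter
    if call_counter >= MAX_CALLS:
        # The unanticipated case: no alarm, no escalation, the call is
        # silently rejected and the caller hears a busy signal.
        return None
    call_counter += 1
    call_id = call_counter  # the counter *is* the unique identifier
    dispatch(call_id, call)
    return call_id
```

Once the counter crosses the ceiling, every subsequent call takes the silent rejection branch, which is why the eventual ‘fix’ was simply to change a single number.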

Quoting Nancy Leveson, an MIT professor of aeronautics (of course), Somers observes: “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Michael Sacasas, in his blogpost, refers to Madrigal’s article and draws out the argument that the complex processes of software development and maintenance, and large, complicated organisations such as Facebook, leave those working within them open to working in a ‘thoughtless’ manner:

“following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely calculate or problem solve) and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of ethical and moral failures. …It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility.”

The simplest version of what I’m getting at is this: on the one hand we attribute significant agency to automated software processes, which usually involves talking about ‘algorithms’ as quasi- or pretty much autonomous, and which tends to imply that whatever it is we’re talking about, e.g. “Facebook’s algorithm”, is ‘other’ to us, ‘other’ to what might conventionally be characterised as ‘human’. On the other hand we talk about how automated processes can encode the assumptions and prejudices of the creators of those techniques and technologies, such as the ‘racist soap dispenser’.

There are a few things we can perhaps note about these related but potentially contradictory narratives.

First, they perhaps imply that the moment of authoring, creating, making, manufacturing is a one-off event – the things are made, the software is written and it becomes set, a bit like baking a sponge cake – you can’t take the flour, sugar, butter and eggs out again. Or, in a more nuanced version of this point, there is a sense that once set in train these things are really, really hard to change, which may, of course, be true in particular cases but also may not be a general rule. A soap dispenser’s sensor may be ‘hard coded’ to particular tolerances, whereas what gets called ‘Facebook’s algorithm’, while complicated, is probably readily editable (albeit with testing, version control and so on). This kind of narrative freights a form of determinism – there is an implied direction of travel to the technology.

Second, the kinds of automated processes I’m referring to here, ‘algorithms’ and so on, get ‘black boxed’. This is not only on the part of those who create, operate and benefit from those processes—the frequently cited Google, Facebook, Amazon and so on—but also in part by those who seek to highlight the black boxing. As Sacasas articulates: “The black box metaphor tries to get at the opacity of algorithmic processes”. He offers a quote from a series of posts by Kevin Hamilton which illustrates something of this:

Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.

In a similar manner to the imagined participant in Searle’s “Chinese Room” thought experiment, the Facebook user can only guess at the efficacy of their relation to the black boxed process. ‘Tweaking our settings’ and responses might, as Hamilton suggests, “become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” A further step here is that even those of us diagnosing and analysing the ‘black boxes’ are perhaps complicit in keeping them in some way obscure. As Evan Selinger and Woodrow Hartzog argue, things that are obscure can be seen as ‘safe’, which is the principle of cryptography. Obscurity, for Selinger & Hartzog, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power”. Nevertheless, obscurity can also be an excuse – the black box is impenetrable, not open to analysis, and so we settle on other analytic strategies or simply focus on other things. A well-worn strategy seems to be to retreat to the ontological, to which I’ll return shortly.

Third, following from the above, perhaps the ways in which we identify ‘black boxes’, or the forms of black boxing we do ourselves, over-simplify or elide complexity. This is a difficult balancing act. A good concept becomes a short-hand that freights meaning in useful ways. However, there is always the potential that it hides as much as it reveals. In the case of the phenomena outlined in the two articles above, we perhaps focus on the ends, what we think ‘the algorithm’ does – the kinds of ‘effects’ we see, such as ‘fake news’ and the breakdown of an emergency telephone system, or even a ‘racist soap dispenser’. It is then very tempting to perform what Sally Wyatt calls a ‘justificatory’ technological determinism – not only is there a ’cause and effect’ but these things were bound to happen because of the kinds of technological processes involved. By fixing ‘algorithms’ as one kind of thing, we perhaps elide the ways in which they can be otherwise and, perhaps more seriously, elide parts of the process of the development, resourcing, use and reception of those technologies and their integration into wider sociotechnical systems and society. These things don’t miraculously appear from nowhere – they are the result of lots of actions and decisions, some banal, some ‘strategic’, some with good intentions and some perhaps morally questionable. By black boxing ‘the algorithm’, attributing ‘it’ with agency and making it ‘other’ to human activities, we ignore or obscure the organisational processes that make it possible at all. I argue we cannot see these things as completely one thing or the other – the black boxed entity or the messy sociotechnical system – but rather as both, and we need to accommodate that sort of duality in our approaches to explanation.

Fourth, normative judgements are attached to the apparent agency of an automated system when it is perceived as core to the purpose of the business. Just like any other complicated organisation whose business becomes seen as a ‘public good’ (energy companies might be another example), competing, perhaps contradictory, narratives take hold. The purpose of the business may be to make money – in the case of Google and Facebook this is of course primarily through advertising, requiring attractive content to which to attach adverts – but the users perhaps consider their experience, which is ‘free’, more important. It seems to have become received wisdom that the very activities that drive the profits of the company, by boosting content that drives traffic and therefore serves more advertising (and, I assume, generates more revenue), run counter to accepted social and moral norms. This exemplifies the competing understandings of what companies like Google and Facebook do – in other words, what their ‘algorithms’ are for. This has a bearing on the kinds of stories we then tell about the perceived, or experienced, agency of the automated system.

Finally (for now), there is a tendency for academic social scientific studies of automated software systems to resort to ontological registers of analysis. There may be all sorts of reasons used as justification for this, such as the specific detail of a given system not being accessible, or (quite often) only being accessible through journalists, or the funding not being available to do the research. However, it also pays dividends to do ‘hard’ theory. In the part of academia I knock about in, geography-land and its neighbours, technology has been packaged up into the ‘non-human’, whereby the implication is that particular kinds of technology are entirely separate from us, humans, and can be seen to have ‘effects’ upon us and our societies. This is trendy cos one can draw upon philosophy that has long words and hard ideas in it, in particular ‘object oriented ontology’ (and, to a much lesser extent, the ‘bromethean’ accelerationists). The generalisable nature of ‘big’ theory is beguiling: it seems to permit us to make general, perhaps global, claims and often results in a healthy return in the academic currency of citations. Now, I too am guilty of resorting to theory, which is more or less abstract, through the work of Bernard Stiegler in particular, but I’d like to think I haven’t disappeared down the almost theological rabbit hole of trying to think objects in themselves through abstract language such as ‘units’ or ‘allopoietic objects’ and ‘perturbations’ of non-human ‘atmospheres’.

It seems to me that while geographers and others have been rightly critical of simplistic binaries of human/technical, there remains a common habit of referring to a technical system that has been written by and is maintained by ‘humans’ as other to whatever that ‘human’ apparently is, and to refer to technologically mediated activities as somehow extra-spatial, as virtual, in contra-distinction to a ‘real’. This is plainly a contradiction. On the one hand this positions the technology in question (‘algorithms’ and so on) as totally distinct from us, imbued with an ability to act without us and so potentially powerful. On the other hand if that technology is ‘virtual’ and not ‘real’ it implies it doesn’t count in some way. While in the late 90s and early 00s the ‘virtual’ technologies we discussed were often seen as somewhat inconsequential, the more contemporary concerns about ‘fake news’, malware and encoded prejudices (such as racism) have made automated software systems part of the news cycle. I don’t think it is a coincidence that we’ve moved from metaphors of liberty and community online to metaphors of ‘killer robots’, like the Terminator (of course there is a real prospect of autonomous weapons systems, as discussed elsewhere).

In the theoretical zeal of ‘decentering the human subject’ and focusing on the apparent alterity of technology, as abstract ‘objects’, we are at risk of failing to address the very concerns which are expressed in the articles by Madrigal and Somers. In a post entitled ‘Resisting the habits of the algorithmic mind’, Sacasas suggests that automated software systems (‘algorithms’) are something like an outsourcing of the kinds of problem solving ‘that ordinarily require cognitive labor–thought, decision making, judgement. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind.’ The prosthetic capacity of technologies like software to in some way automate some of these processes might be liberating, but it is also, as Sacasas suggests, morally and politically consequential. To ‘outsource the life of the mind’, for Sacasas, means to risk being ‘habituated into conceiving of the life of the mind on the model of the problem-solving algorithm’. A corollary to this supposition, I would argue, is that there is a risk in the very diagnosis of this problem that we habituate ourselves to a determinism as well. As argued in the third point, above, we risk obscuring the organisational processes that make such sociotechnical systems possible at all. In the repetition of arguments that autonomous, ‘non-human’ ‘algorithms’ are already apparently doing all of these problematic things, we will these circumstances upon ourselves. There is, therefore, an ethics to thinking about and analysing automation too.

Where does this leave us? I think it leaves us with some critical tools and tasks. We perhaps need to not shy away from the complexity of the systems we discuss – the ideas and words we use can do work for us, ‘algorithm’ for example freights some meaning, but we perhaps need to be careful we don’t obscure as much as we reveal. We perhaps need to use more, not fewer, metaphors. We definitely need more studies that get at the specificity of particular forms, processes and work of automation/automated systems. All of us, journalists and academics alike, perhaps need to use our words more carefully, or use more words to get at the issues.

Simply hailing the ‘rise of the robots’ is not enough. I think this reproduces an imagination of automation that is troubling and ought to be questioned (what I’ve called an ‘automative imaginary’ elsewhere, but maybe that’s too prosaic). For people like me in geography-land to retreat into ‘high’ theory and to only discuss abstract ontological/metaphysical attributes of technology seems to me to be problematic, and is a retreat from that part of the ‘life of the mind’ we claim to advance. I’m not arguing we need to retreat from theory; we simply need to find a balance. A crucial issue for social science researchers of ‘algorithms’ and so on is that this sort of work is probably not the work of a lone-wolf scholar; I increasingly suspect that it needs multi-disciplinary teams. It also needs to, at least in part, produce publicly accessible work (in all senses of ‘accessible’). In this sense work like the report on ‘Media manipulation and disinformation online’ by Data & Society seems like a necessary (but by no means the only) sort of contribution. Prefixing your discipline with ‘digital’ and reproducing the same old theory but applied to ‘digital’ things won’t, I think, cut it.

Reblog> Idols of Silicon and Data

Deep Thought, Hitchhikers Guide to the Galaxy

From LM Sacasas:

Idols of Silicon and Data

In 2015, former Google and Uber engineer, Anthony Levandowski, founded a nonprofit called Way of the Future in order to develop an AI god and promote its worship. The mission statement reads as follows: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

A few loosely interconnected observations follow.

Read the full post.

Reblog > Smart Cities & Smart Citizenship event

Smart City visualisation

Via Peter-Paul Verbeek:

Smart Cities and Smart Citizenship

Oct 12, Free University Brussels:

Smart cities and the role of Internet of Things technologies in public space. How to understand these new human-technology relationships, and how to connect philosophical and ethical analysis to practices of urban planning, design and policymaking? Building on the theory of technological mediation, the lecture will investigate how technologies can contribute to the quality of life of (urban) citizens, create new political configurations, and require new forms of citizenship. We need to broaden the perspectives from which we look at the influence of the digital domain on the analogue city and bring ethical considerations to the center of the smart city and citizens debate. The event takes place at BOZAR (Ravensteinstraat 23, 1000 Brussel). It is a co-production of the Free University of Brussels with Brussels Academy and BOZAR Agora. More information: https://www.vub.ac.be/events/2017/peter-paul-verbeek-smart-cities-and-citizens-a-question-of-technology

“Racist soap dispenser” and artifactual politics

'Racist' soap dispenser

Some widely shared videos show soap dispensers and taps in various public or restaurant toilets that appear to have been calibrated to work with lighter skin colour and so appear not to work with darker skin. See below for a couple of example videos.

Of course, there are (depressingly) all sorts of examples of technologies being calibrated to favour people who conform to a white racial appearance, from Kodak’s “Shirley” calibration cards, to Nikon’s “Did someone blink?” filter, to HP’s webcam face tracking software. There are unfortunately more examples, which I won’t list here, but suffice it to say this demonstrates an important aspect of artefactual and technological politics – things often carry the political assumptions of their designers. Even if this was an ‘innocent’ mistake, such as the result of a manufacturing error skewing the calibration, it demonstrates the sense in which there remains a politics to the artefact/technology in question, because the agency of the object remains skewed along lines of difference.
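To make the calibration point concrete, here is a minimal, hypothetical sketch of how a single hard-coded threshold, tuned only against lighter (more reflective) skin, can quietly encode the kind of bias shown in those videos. This is my own illustration, not any manufacturer’s actual firmware, and the numbers are invented.

```python
# Hypothetical illustration only: a dispenser fires when enough infrared
# light bounces back from a hand under the sensor. The threshold and the
# sample readings below are invented for the sake of the example.

REFLECTANCE_THRESHOLD = 0.55  # fraction of emitted IR returned; tuned on lighter skin


def hand_detected(ir_reading: float) -> bool:
    """Return True when the reflected IR signal crosses the hard-coded threshold."""
    return ir_reading >= REFLECTANCE_THRESHOLD


# The same gesture, different outcomes: a hand that reflects less IR never
# trips the sensor, so the dispenser simply does nothing.
for label, reading in [("lighter skin (illustrative)", 0.68),
                       ("darker skin (illustrative)", 0.42)]:
    print(label, "->", "dispense" if hand_detected(reading) else "no response")
```

The point is not that anyone wrote exclusion into the code, but that a tuning decision made against an unrepresentative test population travels with the artefact into every bathroom in which it is installed.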

There are perhaps three sides to this politics, if we resurrect Langdon Winner’s (1980) well-known argument about artefactual politics and the resulting discussion. First, like the well-known story (cited by Winner, gleaned from Caro) of Robert Moses’ New York bridges, “someone wills a specific social state, and then subtly transfers this vision into an artefact” (Joerges 1999: p. 412). This is what Joerges (1999) calls the design-led version of ‘artefacts-have-politics’, following Winner (I am not condoning Joerges’ rather narrow reading of Winner, just using a useful short-hand).

Second, following Winner, artefacts can have politics by virtue of the kinds of economic, political, social (and so on) systems upon which they are predicated. There is the way in which such a deliberate or mistaken development, such as the tap sensor, is facilitated, or at the least tolerated, by virtue of the kinds of standards that are used to govern the design, manufacture and sale or implementation of a given artefact/technology. So, a bridge that apparently excludes particular groups of people by preventing their most likely means of travel, a bus, from passing under it, or a tap that only works with lighter skin colour, can pass into circulation, or socialisation perhaps, by virtue of normative and bureaucratic frameworks of governance.

In this sense, and again following Winner, we might think about the ways these outcomes transcend “the simple categories of ‘intended’ and ‘unintended’ altogether”. Rather, they represent “instances in which the very process of technical development is so thoroughly biased in a particular direction that it regularly produces results heralded as wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner 1980: pp. 125-6).

So, even when considered the result of error, and especially when the mechanism for regulating such errors is considered to be ‘the market’—with the expectation that if the thing doesn’t work it won’t sell and the manufacturer will be forced to change it—the assumptions behind the rectification of the ‘error’ carry a politics too (perhaps in the sense of Weber’s loaded value judgements).

Third, there is what Woolgar (1991, in a critical response to Winner) calls the ‘contingent and contestable versions of the capacity of various technologies’, which might include the ‘manufacturing mistakes’ but would also include the videos produced and their support or contestation through responses in other videos and in media coverage.

This analysis might become further complicated by widening our consideration of the ways in which contingencies render a given artefact/technology political.

Take, for example, an ‘Internet of Things’ device that might seem innocuous, such as a ‘smart thermostat’ that ‘learns’ when you use the heating and begins to automatically schedule your heating. There are immediate technical issues that might render such a device political, such as in terms of the strength of the security settings, and so whether or not it could be hacked and whether or not you as the ‘owner’ of the device would know and what you may be able to do in response.

Further, there are privacy issues if the ‘smart’ element is actually not embedded in the device but enabled through remote services ‘in the cloud’: do you know where your data is, how it is being used, whether it identifies you, and so on? Further still, the device might appear to be a one-off expense but may actually require a further payment or subscription to work in the way you expected. For example, I bought an Amazon Kindle that had advertising as the ‘screen saver’ and I had to pay an additional £10 to remove it.

Even further, it may be that even if the security, privacy and payment systems are all within the bounds of what one might consider to be politically or ethically acceptable, there are still political contingencies that exclude or disproportionately affect particular groups of people. The thermostat might only work with particular boilers or may require a ‘smart’ meter, so it may also only work with particular energy subscription plans. Such plans, even if they’re no more expensive, might require good credit ratings or other pre-conditions to access them, which are not immediately obvious. Likewise, the thermostat may not work with pre-payment meter-driven systems, which necessarily disadvantages those without a choice – renters, for example.

The thermostat may require a particular kind of smart phone to access its functionality, which again may require particular kinds of phone contract and these may require credit ratings and so on. The manufacturer of the thermostat might cease to trade, or get bought out, and the ‘smart’ software ‘in the cloud’ may cease to function – you may therefore find yourself without a thermostat. If the thermostat was installed in a ‘vulnerable’ person’s home in order to enable remote monitoring by concerned family members this might create anxiety and risk.

As apparently individual, or discrete, artefacts/technologies become more entangled in sociotechnical systems of use (as Kline says), with their concomitant contingencies, the politics of these things has the potential to become more opaque.

So, all artefacts have politics, and the examples within this post might be considered useful, if troubling, contemporary examples for discussion in research projects and in the classroom (as well as, one might hope, the committee rooms of regulators, or parliaments).

P.S. I think this now is a chunk of a lecture rewritten for my “Geographies of Technology” module at Exeter, heh.

Reblog> Angela Walch on the misunderstandings of blockchain technology

Blockchain visualisation

Another excellent, recent episode of John Danaher’s podcast. In a wide-ranging discussion of blockchain technologies with Angela Walch there are lots of really useful explorations of some of the confusing (to me anyway) aspects of what is meant by ‘blockchain’.

Episode #28 – Walch on the Misunderstandings of Blockchain Technology

In this episode I am joined by Angela Walch. Angela is an Associate Professor at St. Mary’s University School of Law. Her research focuses on money and the law, blockchain technologies, governance of emerging technologies and financial stability. She is a Research Fellow of the Centre for Blockchain Technologies of University College London. Angela was nominated for “Blockchain Person of the Year” for 2016 by Crypto Coins News for her work on the governance of blockchain technologies. She joins me for a conversation about the misleading terms used to describe blockchain technologies.

You can download the episode here. You can also subscribe on iTunes or Stitcher.

Show Notes

  • 0:00 – Introduction
  • 2:06 – What is a blockchain?
  • 6:15 – Is the blockchain distributed or shared?
  • 7:57 – What’s the difference between a public and private blockchain?
  • 11:20 – What’s the relationship between blockchains and currencies?
  • 18:43 – What is a miner? What’s the difference between a full node and a partial node?
  • 22:25 – Why is there so much confusion associated with blockchains?
  • 29:50 – Should we regulate blockchain technologies?
  • 36:00 – The problems of inconsistency and perverse innovation
  • 41:40 – Why blockchains are not ‘immutable’
  • 58:04 – Why blockchains are not ‘trustless’
  • 1:00:00 – Definitional problems in practice
  • 1:02:37 – What is to be done about the problem?


Another new book from Bernard Stiegler – Neganthropocene

Bernard Stiegler being interviewed

Open Humanities Press has a(nother!) new book from Bernard Stiegler; blurb pasted below. This is an edited version of Stiegler’s public lectures in various places over the last three or so years, hence Dan Ross’ byline. Dan has done some fantastic work of corralling the fast-moving blizzard of Stiegler’s concepts and his sometimes flitting engagements with a wide range of other thinkers, and I am sure that this book surfaces that work.

It would be interesting to see some critical engagement with this; it seems that Stiegler simply isn’t as trendy as Latour and Sloterdijk or the ‘bromethean’ object-oriented chaps for those ‘doing’ the ‘anthropocene’, for some reason. I’m not advocating his position especially – I have various misgivings if I’m honest (and maybe one day I’ll write them down) – but it is funny that there’s a sort of anglophone intellectual snobbery about some people’s work…

Neganthropocene

by Bernard Stiegler
Edited and translated by Daniel Ross

Forthcoming

As we drift past tipping points that put future biota at risk, while a post-truth regime institutes the denial of ‘climate change’ (as fake news), and as Silicon Valley assistants snatch decision and memory, and as gene-editing and a financially-engineered bifurcation advances over the rising hum of extinction events and the innumerable toxins and conceptual opiates that Anthropocene Talk fascinated itself with–in short, as ‘the Anthropocene’ discloses itself as a dead-end trap–Bernard Stiegler here produces the first counter-strike and moves beyond the entropic vortex and the mnemonically stripped Last Man socius feeding the vortex.

In the essays and lectures here titled Neganthropocene, Stiegler opens an entirely new front moving beyond the dead-end “banality” of the Anthropocene. Stiegler stakes out a battleplan to proceed beyond, indeed shrugging off, the fulfillment of nihilism that the era of climate chaos ushers in. Understood as the reinscription of philosophical, economic, anthropological and political concepts within a renewed thought of entropy and negentropy, Stiegler’s ‘Neganthropocene’ pursues encounters with Alfred North Whitehead, Jacques Derrida, Gilbert Simondon, Peter Sloterdijk, Karl Marx, Benjamin Bratton, and others in its address of a wide array of contemporary technics: cinema, automation, neurotechnology, platform capitalism, digital governance and terrorism. This is a work that will need be digested by all critical laborers who have invoked the Anthropocene in bemused, snarky, or pedagogic terms, only to find themselves having gone for the click-bait of the term itself–since even those who do not risk definition in and by the greater entropy.

Author Bio

Bernard Stiegler is a French philosopher who is director of the Institut de recherche et d’innovation, and a doctor of the Ecole des Hautes Etudes en Sciences Sociales. He has been a program director at the Collège international de philosophie, senior lecturer at Université de Compiègne, deputy director general of the Institut National de l’Audiovisuel, director of IRCAM, and director of the Cultural Development Department at the Centre Pompidou. He is also president of Ars Industrialis, an association he founded in 2006, as well as a distinguished professor of the Advanced Studies Institute of Nanjing, and visiting professor of the Academy of the Arts of Hangzhou, as well as a member of the French government’s Conseil national du numérique. Stiegler has published more than thirty books, all of which situate the question of technology as the repressed centre of philosophy, and in particular insofar as it constitutes an artificial, exteriorised memory that undergoes numerous transformations in the course of human existence.

Daniel Ross has translated eight books by Bernard Stiegler, including the forthcoming In the Disruption: How Not to Go Mad? (Polity Press). With David Barison, he is the co-director of the award-winning documentary about Martin Heidegger, The Ister, which premiered at the Rotterdam Film Festival and was the recipient of the Prix du Groupement National des Cinémas de Recherche (GNCR) and the Prix de l’AQCC at the Festival du Nouveau Cinéma, Montreal (2004). He is the author of Violent Democracy (Cambridge University Press, 2004) and numerous articles and chapters on the work of Bernard Stiegler.