‘Pax Technica’ Talking Politics, Naughton & Howard

Nest - artwork by Jakub Geltner

This episode of the ‘Talking Politics‘ podcast is a conversation between LRB journalist John Naughton and the Oxford Internet Institute’s Professor Philip Howard. It ranges over a number of topics but largely circles around the political issues that may emerge from ‘Internets of Things’ (the plural is important to the argument), which are discussed in Howard’s book ‘Pax Technica‘. Worth a listen if you have time…

One of the slightly throwaway bits of the conversation that interested me, which didn’t concern the tech, was Howard’s comment on the kind of book Pax Technica is – a ‘popular’ rather than a ‘scholarly’ book – and how that has led to a sense of dismissal by some. It seems nuts (to me, anyway), when we’re all supposed to be engaging in ‘impact’, ‘knowledge exchange’ and so on, that opting to write a £17 paperback that opens out debate, instead of an £80+ ‘scholarly’ hardback, is frowned upon. I mean, I understand some of the reasons why, but still…

Working with anxiety

People on a rollercoaster

Gillian Rose’s recently posted advice has sat with me for a few days; it’s been in the background of my thinking as I fail to get much proper work done. It’s got me thinking about the level of anxiety I have been working with, which I suspect is (sadly) not uncommon for many folk right now. I don’t know whether writing this is a good idea. It’s not as though successful people write these sorts of admissions of weakness, but there we go…

This is a sort of public taking-stock, in the hope that it may help me along, and that other folks who feel similar things may feel like they’re not alone. I don’t know…

Gillian Rose’s blogpost happened to come along just as I’ve been failing to write my first proper annual appraisal-type document after passing through our formal probation system. Until that point I only had to do a much-abridged version. Now I’m supposed to do a full ‘PDR’, with six sections of a page each that ask you to provide an account of what you’re doing about: “Career Goals and Plans”, “Research and Scholarship”, “Impact”, “Education” (4th! *sigh*), “Internationalisation” and “Other significant contributions to the university” (not to your discipline, to scholarship, or to academia, but to the institution – make of that what you will). I have been stuck. I was looking at the pages and, while I have written some half-hearted bullet points about the few things I’ve done, it worried me to the extent that I kept putting it off.

The grown-up thing to do is, I think (and I’ve received advice about this from someone I trust), to fill out these kinds of benchmarking/monitoring forms positively but realistically – trying to keep in mind an understanding of your own worth. So, not just chasing the targets (goodness knows there are plenty of them), but also politely saying where you could use more support to realise the things that are mutually beneficial – things you want to do and the university thinks are good too. In truth, my department is very supportive. The university has its jargon and paperwork, but it has always been (for me, anyway) mediated by good people. Nevertheless, I wasn’t taking my own advice. In part this is because I am tired (due to illness/sleeplessness), in part because I’ve felt a bit lost about what to prioritise and what, following Gillian, my ‘brand’ is or should be.

Drawing upon Gillian’s blogpost, I guess what I am reflecting upon is how life-changes affect how you see yourself, how you plan and manage time, and how you judge what’s ‘good enough’, as Gillian put it. In the last three years – my time in my current position – I’ve moved city with my family to enable my commute, there’s been a very serious family illness, we became a ‘family’ – I have two children (15 months and nearly four years old[!!]) – and I have changed my working pattern (to compressed hours: one week’s work in four days) to enable childcare. While I know I am incredibly lucky (I really do, and I remind myself of that frequently): I am exhausted.

This feeling of exhaustion is mostly due to non-work things, which I’d rather keep private, but there are work elements to it too. These come from an accumulation of several factors that seem to play out in my mind regularly: thinking about ‘keeping up’ – with targets, with debates, with expectations (FOMO, apparently); trying to come up with ideas that don’t feel like they’re already being done by those quicker to write; and attempting to be as supportive as possible to others whilst worrying about my own stalling career.

I have been feeling that I cannot seem to come up with a convincing narrative – what Gillian discussed as ‘brand’. I’ve tried a few times but I cannot seem to get momentum. This is where my anxiety lies. I do my best not to compare myself to others but, what with social media, gossip and so on, it’s hard not to. I see others ‘networking’ but I feel less confident about doing it myself. So I feel I’m in a contradictory position: by most measures I am no longer ‘early career’, but I still feel like I haven’t really got started.

Where I’ve got to is this: I have a plan for a sort of narrative around automation. I know I am late to the party and this is already other people’s ‘brand’, but it’s what I’m interested in reading and thinking about. I have some ideas about how I can write about this in a way that keeps me interested but also meets the expected targets. So, I know it’s not especially ‘ambitious’ or cutting-edge or anything, but that’s where I am.

I regret none of my choices and I am really thankful I had them. I have received support from colleagues and my institution to enable me to spend as much time as I can afford with my young children. I wouldn’t change that for anything. Nevertheless, I didn’t realise how ‘professionally’ anxious I would become. As things are settling down, in my new-ish work pattern, I feel like I am at a point of being able to prioritise more clearly.

It seems to me that Gillian’s closing remarks are really important – caring for yourself is crucial. I think you have to try to be kind to yourself as well as trying to be all of the other things. You may be thinking “it’s alright for you as: a man/ someone [with a ‘permanent’ contract]/[in the UK]/[in a ‘good’ department]”, and you are right, but I can only be honest about how I’ve been feeling. I recognise I’m fortunate and I’ve tried to help others and make the most of that good fortune to the extent that I am able.

I hope that these reflections are in some way useful to someone. It may be unwise to write in this public confessional manner, and maybe I’m simply delivering myself to the ‘attention economy’.

Time to finish that form…

Reblog> (video): Gillian Rose – Tweeting the Smart City

Smart City visualisation

Via The Programmable City.

Seminar 2 (video): Gillian Rose – Tweeting the Smart City

We are delighted to share the video of our second seminar in our 2017/18 series, entitled Tweeting the Smart City: The Affective Enactments of the Smart City on Social Media, given by Professor Gillian Rose from Oxford University on the 26th October 2017 and co-hosted with the Geography Department at Maynooth University.

Abstract
Digital technologies of various kinds are now the means through which many cities are made visible and their spatialities negotiated. From casual snaps shared on Instagram to elaborate photo-realistic visualisations, digital technologies for making, distributing and viewing cities are more and more pervasive. This talk will explore some of the implications of that digital mediation of urban spaces. What forms of urban life are being made visible in these digitally mediated cities, and how? Through what configurations of temporality, spatiality and embodiment? And how should that picturing be theorised? Drawing on recent work on the visualisation of so-called ‘smart cities’ on social media, the lecture will suggest the scale and pervasiveness of digital imagery now means that notions of ‘representation’ have to be rethought. Cities and their inhabitants are increasingly mediated through a febrile cloud of streaming image files; as well as representing cities, this cloud also operationalises particular, affective ways of being urban. The lecture will explore some of the implications of this shift for both theory and method as well as critique.

John Danaher interview – Robot Sex: Social and Ethical Implications

Gigolo Jane and Gigolo Joe robots in the film A.I.

Via Philosophical Disquisitions.

Through the wonders of modern technology, Adam Ford and I sat down for an extended video chat about the new book Robot Sex: Social and Ethical Implications (MIT Press, 2017). You can watch the full thing above or on YouTube. Topics covered include:

  • Why did I start writing about this topic?
  • Sex work and technological unemployment
  • Can you have sex with a robot?
  • Is there a case to be made for the use of sex robots?
  • The Campaign Against Sex Robots
  • The possibility of valuable, loving relationships between humans and robots
  • Sexbots as a social experiment

Be sure to check out Adam’s other videos and support his work.

Autumnal AI links

Facial tracking system, showing gaze direction, emotion scores and demographic profiling

Another blogpost where I’m just gonna splurge some links cos they’re just sitting as open tabs in my browser and I may as well park them and share them at the same time, in case anyone else is interested…

(If you’re somehow subscribed to this blog and don’t like this, let me know and I’ll see if I can set up another means of doing this… I used to use del.icio.us, remember that?!)

Here are some A.I. things from my browser, then:


Adversarial attacks on machine learning

There’s been quite a bit of chat about the ways particular kinds of neural nets used in machine vision systems are vulnerable to techniques that either cause trained systems to mis-recognise images or feed mis-recognition into the training process itself.
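
For the curious, here is a minimal sketch of one of the best-known techniques of this kind, the ‘fast gradient sign method’ (FGSM) – not the specific method of any of the studies linked here, just an illustration. It assumes PyTorch, and `model`, `image` and `label` are placeholders for a pre-trained classifier and its inputs.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, producing an image that looks unchanged
# to a human but can be mis-recognised by the model.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Return a perturbed copy of `image` (pixel values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    # The sign of the gradient tells us which way to push each pixel.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is typically imperceptible to a human viewer but can flip the model’s classification; the traffic-sign and 3D-printed-turtle examples discussed below rely on more sophisticated, physically robust variants of the same basic idea.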

danah boyd made this part of her public talks earlier this year, drawing upon a ‘shape bias’ study by Google researchers. Two recent overview pieces on The Verge and Quartz are accessible ways into such issues too.

Other stories on news sites (e.g.) have focussed on the ways machine vision systems that could be used in ‘driverless’ cars for recognising traffic signs can be ‘fooled’, drawing upon another study by researchers at four US institutions.

Another story doing the rounds has been a model of a 3D-printed turtle that was used to fool what is referred to as “Google’s object detection AI” into classifying it as a gun. Many of these accounts start with the same paper boyd cites, move on to discuss work such as the ‘one pixel’ hack for confusing neural nets by researchers at Kyushu, and then discuss a paper on the 3D-printed turtle model as an ‘adversarial object’ by researchers at MIT.

A Facebook spokesperson says the company is exploring securing against adversarial examples, as shown by a research paper published in July 2017, but it apparently hasn’t yet implemented anything. Google, where a number of the early ‘adversarial’ examples were researched, has apparently declined to comment on whether its APIs and deployed ‘AI’ are secured, but researchers there have recently submitted conference papers on the topic.

A reasonable overview of this kind of research is available on Popular Science by Dave Gershgorn: “Fooling The Machine“. Artist James Bridle (who else?!) has also written and made some provocative work in response to these kinds of issues, e.g. Autonomous Trap 001 and Austeer.


Biases and ethics of AI systems

There’s, of course, tons of work on the ways biases are encoded into ‘algorithms‘ and software, but a little more attention to this sort of thing in relation to AI has been appearing in my social media stream this year…

Vice’s Motherboard covered a story concerning the ways in which a sentiment analysis system by Google appeared to classify statements about being gay or Jewish as ‘negative’.

Sky News covered a story about apparently erroneous arrests at the Notting Hill Carnival this year (2017), allegedly caused by facial recognition systems.

An interesting event at the Research and Development department at Het Nieuwe Instituut addressed ‘the ways that algorithmic agents perform notions of human race’. Decolonising Bots included Ramon Amaro, Florence Okoye and Legacy Russell.


The Financial Stability Board has an interesting report out, Artificial intelligence and machine learning in financial services, which seems well worth reading.


Defending corporate R&D in AI

Facebook’s head of AI is fed up with the negative, or apocalyptic, references used to describe AI, e.g. the Terminator. It’s not just a whinge – there’s some interesting discussion in this interview on The Verge.

Technology policy pundit Andrea O’Sullivan says the U.S. needs to be careful not to hamstring innovation by letting ‘the regulators ruin AI‘.


Finally, the British Library have an event on Monday 6th November called “AI: Where Science meets Science Fiction“, which may or may not be interesting… it will be live-streamed, apparently.

Reblog> Robots: ethical or unethical?

Twiki the robot from Buck Rogers

From Peter-Paul Verbeek.

ROBOTS: ETHICAL OR UNETHICAL?

To highlight the relevance of the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) and to present its recent report on the ethics of robotics, the Permanent Delegation of the Kingdom of the Netherlands to UNESCO will organize a lunch debate on “Robots: ethical or unethical?” during the 39th General Conference, on Friday 10 November 2017, in Room X from 13:00 to 14:15.

The session will be opened by H.E. Ambassador Lionel Veer and Ms Nada Al-Nashif, Assistant Director-General for Social and Human Sciences, and will be moderated by Prof. Peter-Paul Verbeek, COMEST member and philosopher.

The issues addressed will be the following:

· What change caused by robots can we expect? Presented by Prof. Vanessa Evers, University of Twente

· Which are the ethical dilemmas? Presented by Prof. Mark Coeckelbergh, University of Vienna

· How can we ensure that innovation is ethical? Presented by Prof. Peter-Paul Verbeek, University of Twente (COMEST member)

The presentations will be followed by an interactive debate with the audience and by a reflection on the role of UNESCO and the COMEST on these issues.

Automation as received wisdom

Holly from the UK TV programme Red Dwarf

For your consideration – a Twitter poll in a sponsored tweet from one of the UK’s largest management consultancies.

Why might a management consultancy do this? To gain superficially interesting yet fatuous data to make quick claims about? Perhaps for the purposes of advertising? Maybe… Perhaps to try to suggest, in a somewhat calculating way, that the company asks the “important” questions about the future, and thereby imply it has some answers? Or maybe simply to boost the now-prevailing narrative that automation is widespread, growing and will take your job. Although, to be fair to Accenture, that’s not what they ask.

In any case, this is not neutral – though I recognise it’s a rather minor and perhaps inconsequential example. Nevertheless, it highlights the growing push of an automation narrative by management consultancies like Accenture, Deloitte and PwC, which are all writing lots of reports suggesting that companies need to be ready for automation.

A cynical analysis would suggest that it’s in the interests of such companies to jump on the narrative (it’s been in the press quite a bit in recent years), ramp it up, and offer to sell the ‘solutions’.

What I find particularly interesting is that, while newspaper articles parrot the reports from these consultancies, there appears to be (in my digging around) scant serious evidence for this trend. A lot of it is based on economic modelling (of both past and future economic contexts), and some of the reports, when they do list methods, seem to use adapted versions of models that once said something else.

A case in point is the recent PwC report about automation, widely reported in the press (e.g. BBC, Graun, Telegraph), which claimed up to 30% of UK jobs could be automated by the early 2030s. It was based upon a modified (2016) OECD model – yet the original model suggested that only 9% of jobs in OECD countries were at relatively high risk of automation (greater than 70% risk in their calculation), with the UK rated at just over 10% of jobs.

I’m working my way through this sort of stuff to get at how these sorts of narratives are generated, become received wisdom and feed into a wider form of social imagination about the kinds of socio-economic and technological future we expect. I’m hoping to pull together a book on this theme with the tentative title “The Automative Imagination”.