Another blogpost where I’m just gonna splurge some links cos they’re just sitting as open tabs in my browser and I may as well park them and share them at the same time, in case anyone else is interested…
(If you’re somehow subscribed to this blog and don’t like this, let me know and I’ll see if I can set up another means of doing this… I used to use del.icio.us, remember that?!)
Here’s some A.I. things from my browser then:
Adversarial attacks on machine learning
There’s been quite a bit of chat about the ways particular kinds of neural nets used in machine vision systems are vulnerable to techniques that either trigger mis-recognition of images at inference time or poison the training process so that mis-recognition gets learned.
danah boyd made this part of her public talks earlier this year, drawing upon a ‘shape bias’ study by Google researchers. Two recent overview pieces on The Verge and Quartz are accessible ways into such issues too.
Other stories on news sites (e.g.) have focussed on the ways machine vision systems that could be used in ‘driverless’ cars for recognising traffic signs can be ‘fooled’, drawing upon another study by researchers at four US institutions.
Another story doing the rounds has been the model of a 3D printed turtle that was used to fool what is referred to as “Google’s object detection AI” into classifying it as a gun. Many of these accounts start with the same paper boyd cites, move on to discuss work such as the ‘one pixel’ hack for confusing neural nets by researchers at Kyushu, and then discuss a paper on the 3D printed turtle model as an ‘adversarial object’ by researchers at MIT.
A Facebook spokesperson says the company is exploring ways of securing itself against adversarial examples, pointing to a research paper published in July 2017, but they apparently haven’t yet implemented anything. Google, where a number of the early ‘adversarial’ examples were researched, has apparently declined to comment on whether its APIs and deployed ‘AI’ are secured, but its researchers have recently submitted conference papers on the topic.
A reasonable overview of this kind of research is available on Popular Science by Dave Gershgorn: “Fooling The Machine“. Artist James Bridle (who else?!) has also written and made some provocative work in response to these kinds of issues, e.g. Autonomous Trap 001 and Austeer.
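For anyone curious about the mechanics behind these stories: the common trick in a lot of this research is to nudge an input in the direction of the model’s loss gradient, so a perturbation too small to matter to a human flips the classification. A toy sketch of that idea (everything here — the weights, the input, the epsilon — is made up for illustration, not taken from any of the papers linked above) might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy linear classifier with fixed, hypothetical weights.
w = np.array([1.0, -2.0, 3.0])
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

# An input the model confidently places in class 1 (score > 0.5).
x = np.array([0.5, 0.5, 0.5])
y = 1.0  # true label

# Gradient of the log-loss with respect to the *input* (not the weights):
# for logistic regression this is (p - y) * w.
p = predict(x)
grad = (p - y) * w

# Fast-gradient-sign-style step: move each pixel/feature a tiny amount
# (epsilon) in the direction that increases the loss.
epsilon = 0.2
x_adv = x + epsilon * np.sign(grad)

print(predict(x), predict(x_adv))  # the second score drops below 0.5
```

The perturbation is bounded by epsilon in every coordinate, which is why adversarial images can look unchanged to a person while the classifier’s answer flips; the ‘one pixel’ and 3D-printed-object attacks mentioned above use different search strategies, but exploit the same brittleness.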
Biases and ethics of AI systems
There’s, of course, tons on the ways biases are encoded into ‘algorithms‘ and software but there’s been a little more attention to this sort of thing in relation to AI appearing in my social media stream this year…
Vice’s Motherboard covered a story concerning the ways in which a sentiment analysis system by Google appeared to classify statements about being gay or a Jew as ‘negative’.
Sky News covered a story about an apparent case of erroneous arrests at the Notting Hill Carnival this year (2017) that were allegedly caused by facial recognition systems.
An interesting event at the Research and Development department at Het Nieuwe Instituut addressed ‘the ways that algorithmic agents perform notions of human race’. The event, Decolonising Bots, included Ramon Amaro, Florence Okoye and Legacy Russell.
The Financial Stability Board has an interesting report out concerning: Artificial intelligence and machine learning in financial services, which seems well worth reading.
Defending corporate R&D in AI
Facebook’s head of AI is fed up with the negative, or apocalyptic, references used for describing AI, e.g. The Terminator. It’s not just a whinge; there’s some interesting discussion in this interview on The Verge.
Technology policy pundit Andrea O’Sullivan says the U.S. needs to be careful not to hamstring innovation by letting ‘the regulators ruin AI‘.
Finally, the British Library have an event on Monday 6th November called: “AI: Where Science meets Science Fiction“, which may or may not be interesting… it will be live-streamed apparently.