The first instalment is: Arguing with theory.
This will be worth following!
Via Clive. This will be worth attending if you’re going to the AAG in 2018…
An interesting general (non-academic, non-technical) discussion about what “AI” is, what it means culturally and how it is variously thought about. It’s worth reflecting on the way ideas about computation, “algorithms”, “intelligence” and so on play out… something that maybe isn’t discussed enough… I like the way the discussion turns around “thinking” and the suggestion of the word “reckoning”. Worth a listen…
The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:
The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.
“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”
There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:
- 1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
- 2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
- 3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
- 4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
- 5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
- 6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
- 7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
- 8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
- 9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
- 10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.
Which sort of reads, to me, as: “There should be more social scientists involved” 🙂
A thought experiment based upon a flippant suggestion:
Object Oriented Ontology is to philosophy what Uber is to tech development.
Both ‘disruptive’, Uber and OOO have expanded beyond their initial contexts, which is by several measures ‘success’. Both have become discursive shortcuts for a particular set of ideas – ‘gig economy’ and ‘automation’ for Uber, and ‘speculative realism’ and maybe even ‘metaphysics’ for OOO (and there are possibly other associations for these terms too).
Neither OOO nor Uber was first to come up with the ideas it propounds; both ‘innovated’ from others (not necessarily a problem) and then made grand claims on that basis (maybe a problem).
Neither of the groups involved in the development of Uber or OOO has acted especially ethically, although Uber is almost certainly significantly worse (this isn’t a like-for-like comparison). This is one of the other ways in which these words have become pregnant with meaning. Uber has been variously documented as having a problem with misogyny in the workplace and has also teetered on the edge of legality through ‘greyball’. Some of the proponents of OOO have been accused of bullying graduate students online and at conferences (I recognise gossip can be pernicious, but I’ve heard this from several unrelated sources). It has also been suggested that some of these folks are garnering a reputation for being somewhat ‘macho’ in attitude – it probably doesn’t help that the lead figures are all male, that they write lots of earnest manifestos or that they succumb to profiles in newspapers that call them a “philosopher prophet”. Of course, neither OOO nor Uber is unique in this; similar observations/accusations have been made of antecedent tech firms and philosophical movements – one need only look to TV programmes like “Silicon Valley” or open up the ‘theory boy’ can of worms.
Finally, there is also a sense that the success of both Uber and OOO makes them easily co-opted into these (pejorative) narratives. There are grounds for this – certainly for Uber – but the visibility that success brings makes it easier to tell these stories. I have no doubt that such alleged behaviour is not limited to those involved in Uber or OOO. Likewise, those categories may be contested, and we shouldn’t tar everyone who works for a company or does a particular branch of theory with the same brush. Goodness knows there are plenty of “tech bros” and, for want of a better term, “theory bros” outside of Uber and OOO.
Such a critique, however flippant, can come across as a bit pompous or sly. I cannot stand outside this; I am, to a degree, complicit. For example, the cartel-like citational practices used by “theory bros” are easy to slip into – many of us have succumbed. To recognise stupidity, as both Ronell and Stiegler point out, is to recognise my own stupidity – the lesson, perhaps the ‘ethic’, is to pass through it towards knowledge. Not the reproduction of the same knowledge (that’s patriarchy), and not always, I think, difference for its own sake (isn’t that what the “tech bros” call “disruption”? and doesn’t that always require being in a privileged position?), but perhaps a thoughtful defiance – not ‘laughing along’. This could mean more “no’s” (following Sara Ahmed). Maybe even something like a NO movement – “No Ontology”, at least the kinds of ontology that get used as authority in the kinds of theory top trumps that get played by some of us in the social sciences and humanities… of course this isn’t a novel suggestion either; it’s somewhat akin to feminist standpoint theory.
Perhaps I’m being unkind to OOO and those who do/use it. Success breeds contempt and all that… but the thought experiment was interesting to run through, in my own ham-fisted way…
I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the module about technology that I run – in particular, the sense of risk and threat that sometimes gets articulated about digital media, and how this maybe carries with it other kinds of narrative about technology, like versions of determinism for instance. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility – for whom, and why – in relation to ‘digital’ media… I could write more, but I’ll just post a few of the videos for now…