The critical writing on data science has taken the paradoxical position of insisting that normative issues pervade all work with data while leaving unaddressed the issue of data scientists’ ethical agency. Critics need to consider how data scientists learn to think about and handle these trade-offs, while practicing data scientists need to be more forthcoming about all of the small choices that shape their decisions and systems.
Technical actors are often far more sophisticated than critics at understanding the limits of their analysis. In many ways, the work of data scientists is a qualitative practice: they are called upon to parse an amorphous problem, wrangle a messy collection of data, and make it amenable to systematic analysis. To do this work well, they must constantly struggle to understand the contours and the limitations of both the data and their analysis. Practitioners want their analysis to be accurate and they are deeply troubled by the limits of tests of validity, the problems with reproducibility, and the shortcomings of their methods.
Many data scientists are also deeply disturbed by those who are coming into the field without rigorous training and those who are playing into the hype by promising analyses that are not technically or socially responsible. In this way, they could be allies of critics: both see a need for nuance within the field. Unfortunately, universalizing critiques may undermine critics’ opportunities to work with data scientists to meaningfully address some of the most urgent problems.
The passage above is from an essay published by the Institute of Network Cultures.
This episode of the ‘Talking Politics’ podcast is a conversation between LRB journalist John Naughton and the Oxford Internet Institute’s Professor Philip Howard, ranging over a number of topics but largely circling around the political issues that may emerge from ‘Internets of Things’ (the plural is important to the argument), which are discussed in Howard’s book ‘Pax Technica’. Worth a listen if you have time…
One of the slightly throwaway bits of the conversation that interested me, which didn’t concern the tech itself, was when Howard comments on the kind of book Pax Technica is – a ‘popular’ rather than a ‘scholarly’ book – and how that had led to a sense of dismissal by some. It seems nuts (to me, anyway), when we’re all supposed to be engaging in ‘impact’, ‘knowledge exchange’ and so on, that opting to write a £17 paperback that opens out debate, instead of an £80+ ‘scholarly’ hardback, is frowned upon. I mean, I understand some of the reasons why, but still…
Quite by chance I stumbled across the Twitter coverage of a UK Authority event entitled “Return of the Bots” yesterday. There was a range of speakers, it seems, from the public and private sectors. An interesting element was the snippets about the increasing use of process automation by the UK Government.
Here are some of the tweets I ‘favourited’ for further investigation (below). I hadn’t quite appreciated where government had got to. It would be interesting to look into the rationale both central and local government are using for RPA – I assume it is cost-driven(?). I hope to follow up on some of this…
“Within 3-5 yrs robotics will be commonplace in government” https://t.co/u8YiQLX7Ae
— Helen Olsen Bedford (@helenolsen) November 14, 2017
@cabinetofficeuk & @CapgeminiConsul partnership aims to take 1-2 years off time for UK #publicsector to adopt #robotics process automation https://t.co/OFVOsGNTlL #ReturnoftheBots pic.twitter.com/TcpnGJKacK
— UKAuthority (@UKAuthority) November 15, 2017
Highlight 1. Prof Birgitte Andersen @BigInnovCentre suggests we should be thinking about ‘the new rules, norms and standards for delivering public services with regards to AI’ #AI #returnofthebots @UKAuthority pic.twitter.com/E8njgSa5aS
— Jon Robertson (@JonSRobertson) November 14, 2017
From the Programmable City team, this looks interesting:
An interesting general (non-academic, non-technical) discussion about what “AI” is, what it means culturally, and how it is variously thought about. It’s worth reflecting on how ideas about computation, “algorithms”, “intelligence” and so on play out… something that maybe isn’t discussed enough. I like the way the discussion turns around “thinking” and the suggestion of the word “reckoning”. Worth a listen…
The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:
The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.
“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”
There’s also a sort of exec summary – a list of “10 Top Recommendations for the AI Field in 2017” – on Medium. Here’s the short version of that:
1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains), should no longer use ‘black box’ AI and algorithmic systems.
2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision-making power.
10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.
Which sort of reads, to me, as: “There should be more social scientists involved” 🙂