The AI Now Institute has published its second annual report, with plenty of interesting things in it. I won’t try to summarise it or offer any analysis (yet). It’s worth a read:
The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.
“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”
There’s also a sort of executive summary, a list of “10 Top Recommendations for the AI Field in 2017”, on Medium. Here’s the short version of that:
- 1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (i.e. “high stakes” domains), should no longer use ‘black box’ AI and algorithmic systems.
- 2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that it will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
- 3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
- 4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
- 5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
- 6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
- 7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
- 8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
- 9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
- 10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.
Which sort of reads, to me, as: “there should be more social scientists involved” 🙂