The IEEE Spectrum website reports ‘Computer Scientists Find Bias in Algorithms’ [no! really?!], which makes me chuckle after recently (finally) getting down some thoughts about how we variously talk about, and may study, the things we call algorithms.
The following excerpt demonstrates how a particular understanding of acceptable knowledge (i.e. ‘science’, to the exclusion of other ways of knowing the world) and a kind of anthropomorphic slip reveal that even the idea of ‘bias’ here is firmly situated within a particular discursive regime:
Understanding how an algorithm becomes biased is fascinating, says Venkatasubramanian. The bias is germinated innocently enough within a simple processing system, and develops in a carefully controlled setting. But the self-improving nature of learning algorithms raises concern. Venkatasubramanian and his colleagues wonder whether we can ever trust the fairness of algorithms. To that end, they have begun stockpiling relevant information about where and how they go wrong. He also hopes that the study will help lawmakers understand how algorithms and big data should be treated in a legal case.
Many people believe that an algorithm is just a code, but that view is no longer valid, says Venkatasubramanian. “An algorithm has experiences, just as a person comes into life and has experiences.”