Hyperland

A glitchy image of a 1990s NASA VR experience

A bit of nostalgia… ‘practising tomorrows’ and all that.

Lots of things to crit with the benefit of hindsight, which I’m sure some folks did – I mean, the peculiar sort of aesthetic policing implied is funny, and the fact that none of the folk used as talking heads can imagine a collaborative form of authorship is quite interesting. This programme came out in 1990, around the same time Berners-Lee was pioneering the web – a rather different, perhaps more “interactive” vision of ‘multimedia’ – insofar as with the web we can all contribute to the creation as well as the consumption of media [he writes in the dialog box of the “Add New Post” page of the WordPress interface]…

A slightly geeky thing I appreciate, though, is the very clear visual reference to the 1987 Apple Computer ‘video prototype’ called ‘Knowledge Navigator’ (← follow the link, third video down; see also), which I’m certain is deliberate.

‘Automated’ sweated labour

Charlie Chaplin in Modern Times

This piece by Sonia Sodha (Worry less about robots and more about sweatshops) in the Grauniad, which accompanies an episode of the Radio 4 programme Analysis (Who Speaks for the Workers?), is well worth checking out. It makes a case around which consensus seems to be growing – that ‘automation’ in particular parts of industry will not mean ‘robots’ but pushing workers to become more ‘robotic’. This is an interesting foil to the ‘automated luxury communism’ schtick and the wider imaginings of automation. If you stop to think about wider and longer-term trends in labour practices, it also feels depressingly possible…

This is the underbelly of our labour market: illegal exploitation, plain and simple. But there are other legal means employers can use to sweat their labour. In a sector such as logistics, smart technology is not being used to replace workers altogether, but to make them increasingly resemble robots. Parcel delivery and warehouse workers find themselves directed along exact routes in the name of efficiency. Wrist-based devices allow bosses to track their every move, right down to how long they take for lavatory breaks and the speed with which they move a particular piece of stock in a warehouse or from the delivery van to someone’s front door.

This hints at a chilling future: not one where robots have replaced us altogether, but where algorithms have completely eroded worker autonomy, undermining the dignity of work and the sense of pride that people can take in a job well done.

This fits well with complementary arguments about ‘heteromation’ and other more nuanced understandings of what’s followed or extended what we used to call ‘post-Fordism’…

The “Ethics and Governance of Artificial Intelligence Fund” commits $7.6M to research “AI for the public interest”

A press release from the Knight Foundation and another from Omidyar Network highlight their joint effort, with several other funders, to commit $7.6M, distributed across a range of international institutions, to research “AI for the public interest”. This seems like an ambitious and interesting research programme, albeit located in the elite institutions one might unfortunately expect to hoover up this sort of funding… Nevertheless, it will be interesting to see what comes of this.

Here are some snippets (see the full Knight Foundation PR here and the Omidyar PR here).

…a $27 million fund to apply the humanities, the social sciences and other disciplines to the development of AI.

The MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University will serve as founding academic institutions for the initiative, which will be named the Ethics and Governance of Artificial Intelligence Fund. The fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally.

The fund seeks to advance AI in the public interest by including the broadest set of voices in discussions and projects addressing the human impacts of AI. Among the issues the fund might address:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

Supporting a Global Conversation

  • Digital Asia Hub (Hong Kong): Digital Asia Hub will investigate and shape the response to important, emerging questions regarding the safe and ethical use of artificial intelligence to promote social good in Asia and contribute to building the fund’s presence in the region. Efforts will include workshops and case studies that will explore the cultural, economic and political forces uniquely influencing the development of the technology in Asia.
  • ITS Rio (Rio de Janeiro, Brazil): ITS Rio will translate international debates on artificial intelligence and launch a series of projects addressing how artificial intelligence is being developed in Brazil and in Latin America more generally. On behalf of the Global Network of Internet and Society Research Center, ITS Rio and the Berkman Klein Center will also co-host a symposium on artificial intelligence and inclusion in Rio de Janeiro, bringing together almost 80 centers and an international set of participants to address diversity in technologies driven by artificial intelligence, and the opportunities and challenges posed by it around the world.

Tackling Concrete Challenges

  • AI Now (New York): AI Now will undertake interdisciplinary, empirical research examining the integration of artificial intelligence into existing critical infrastructures, looking specifically at bias, data collection, and healthcare.
  • Leverhulme Centre for the Future of Intelligence (Cambridge, United Kingdom): Leverhulme Centre for the Future of Intelligence will be focused on bringing together technical and legal perspectives to address interpretability, a topic made urgent by the European Union’s General Data Protection Regulation coming into force next year.
  • Access Now (Brussels, Belgium): Access Now will contribute to the rollout of the General Data Protection Regulation by working closely with data protection authorities to develop practical guidelines that protect user rights, and educate public and private authorities about rights relating to explainability. The organization will also conduct case studies on data protection issues relating to algorithms and artificial intelligence in France and Hungary.

Bolstering Interdisciplinary Work 

  • FAT ML (Global): FAT ML will host a researcher conference focused on developing concrete, technical approaches to securing values of fairness, accountability, and transparency in machine learning.
  • Data & Society (New York): Data & Society will conduct a series of ethnographically-informed studies of intelligent systems in which human labor plays an integral part, and will explore how and why the constitutive human elements of artificial intelligence are often obscured or rendered invisible. The research will produce empirical work examining these dynamics in order to facilitate the creation of effective regulation and ethical design considerations across domains.


About the Ethics and Governance of Artificial Intelligence Fund 

The Ethics and Governance of Artificial Intelligence Fund aims to support work around the world that advances the development of ethical artificial intelligence in the public interest, with an emphasis on applied research and education. The fund was launched in January 2017, with an initial investment of $27 million from the John S. and James L. Knight Foundation, Omidyar Network, LinkedIn founder Reid Hoffman, the Hewlett Foundation, and Jim Pallotta. The activities supported through the fund aim to address the global challenges of artificial intelligence from a multidisciplinary perspective—grounded in data, code, and academic analysis. The fund will advance the public understanding of artificial intelligence and support the creation of networks that span disciplines and topics related to artificial intelligence. The Miami Foundation is serving as fiscal sponsor for the fund.

Via Alexis Madrigal.

Algorithm 1986

Just cos it’s fun…

From the second edition of The Dictionary of Human Geography, written by Prof. Peter Gould:

A step-by-step procedure, usually supported by a formal mathematical proof, that leads to a desired solution. An example is the Simplex Algorithm in Linear Programming. Heuristic algorithms are not supported by formal proofs, but are highly likely to lead to the optimal solutions. Examples include finding multiple locations within, and the shortest paths through, a network. The word derives from the name Al-Ghorizmeh, a distinguished Arab geographer-mathematician of the sixth century A.D. (See also Districting Algorithm.)   PG

Slightly different from the two-sentence version by Prof. Ron Johnston in the 5th edition (2009).
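Purely as a playful aside to Gould’s “shortest paths through a network” example: here is a minimal sketch of that kind of step-by-step procedure (Dijkstra’s algorithm) in Python. The toy network and names are my own illustration, nothing to do with the dictionary entry itself.

```python
import heapq

def shortest_path_lengths(graph, source):
    """Dijkstra's algorithm: shortest path lengths from `source`
    in a weighted graph given as {node: {neighbour: distance}}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbour, weight in graph[node].items():
            candidate = d + weight
            if candidate < dist.get(neighbour, float("inf")):
                dist[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return dist

# A made-up network of four places with road distances
network = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "D": 5},
    "C": {"A": 2, "D": 8},
    "D": {"B": 5, "C": 8},
}
print(shortest_path_lengths(network, "A"))  # {'A': 0, 'B': 4, 'C': 2, 'D': 9}
```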

How and why is children’s digital data being harvested?

Nice post by Huw Davies, which is worth a quick read (it’s fairly short)…

We need to ask what would data capture and management look like if it is guided by a children’s framework such as this one developed here by Sonia Livingstone and endorsed by the Children’s Commissioner here. Perhaps only companies that complied with strong security and anonymisation procedures would be licenced to trade in UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.

“algorithmic governance” – recent ‘algorithm’ debates in geography-land

Over on Antipode’s site there’s a blog post about an intervention symposium on “algorithmic governance” brought together by Jeremy Crampton and Andrea Miller, on the back of sessions at the AAG in 2016. It’s good that this is available open access and, I hope, helpful that it maybe puts to bed some of the definition wrangling that has been the fashion. Obviously, a lot draws on the work of geographer Louise Amoore and also of political theorist Antoinette Rouvroy, which is great.

Reading through the overview and skimming the individual papers provokes me to comment that I remain puzzled, though, by the wider creeping use of an unqualified “non-human” to talk about software and the sociotechnical systems they run/are run on… this seems to play down precisely the political issues raised in this particular symposium – the kinds of algorithms concerned in this debate are written and maintained by people; they’re not somehow separate or at a distance… It’s also interesting to note that a sizeable chunk of the debates concern ‘data’, yet the symposium doesn’t have “data” in the title – but maybe ‘data–’ is passé… 🙂

I’ve copied below the intro to the post, but please check out the whole thing over on Antipode’s site.

Intervention Symposium: “Algorithmic Governance”; organised by Jeremy Crampton and Andrea Miller

The following essays first came together at the 2016 AAG Annual Meeting in San Francisco. Jeremy Crampton (Professor of Geography at the University of Kentucky) and Andrea Miller (PhD candidate at University of California, Davis) assembled five panellists to discuss what they call algorithmic governance – “the manifold ways that algorithms and code/space enable practices of governance that ascribes risk, suspicion and positive value in geographic contexts.”

Among other things, panellists explored how we can best pay attention to the spaces of governance where algorithms operate, and are contested; the spatial dimensions of the data-driven subject; how modes of algorithmic modulation and control impact understandings of categories such as race and gender; the extent to which algorithms are deterministic, and the spaces of contestation or counter-algorithms; how algorithmic governance inflects and augments practices of policing and militarization; the most productive theoretical tools available for studying algorithmic data; visualizations such as maps being implicated by or for algorithms; and the genealogy of algorithms and other histories of computation.

Three of the panellists plus Andrea and Jeremy present versions of these discussions below, following an introduction to the Intervention Symposium from its guest editors (who Andy and Katherine at Antipode would like to thank for all their work!).

Read the whole post and see the contributions to the symposium on the Antipode site.

Reblog> Workshop: Reshaping Cities through Data and Experiments

This looks interesting (via Programmable City):

Workshop: Reshaping Cities through Data and Experiments

When: 30th May 2017 – 9.30am to 3.30pm
Where: Maynooth University, Iontas Building, Seminar Room 2.31

The “Reshaping Cities through Data and Experiments” workshop is part of the Ulysses research exchange programme jointly funded by Irish Research Council and the Ambassade de France. It is organized in collaboration with researchers from the Centre de Sociologie de l’Innovation (i3-CSI) at the École des Mines in Paris – David Pontille, Félix Talvard, Clément Marquet and Brice Laurent – and researchers from the National Institute for Regional and Spatial Analysis (NIRSA) in Maynooth University, Ireland – Claudio Coletta, Liam Heaphy and Sung-Yueh Perng.

The aim is to initiate a transdisciplinary discussion on the theoretical, methodological and empirical issues related to experimental and data-driven approaches to urban development and living. This conversation is vital in a time when cities are increasingly turning into public-private testbeds and living labs, where urban development projects merge with the design of cyber-infrastructures to test new services and new forms of engagement for urban innovation and economic development. These new forms of interaction between algorithms, planning practices and governance processes raise crucial questions for researchers on how everyday life, civic engagement and urban change are shaped in contemporary cities.

Read the full blogpost on the Programmable City site.

An ancient twin? Facial pattern matching with ancient statues


The Musée de la Civilisation in Quebec has an exhibition about ancient ‘doubles’ or ‘twins’, as part of which you can submit your photo and a program will match your face with images of statues in the collection.

It’s been in the press and, of course, is ‘just a bit of fun’, but it’s also sort of interesting to submit images and try to work out how the pattern matching is working – it’s not all that obvious! There’s probably something smart to say about ‘algorithms’ here, but I’ve not had enough sleep… check it out for yourself: Mon Sosie À 2000 Ans.
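I have no idea what the exhibition actually uses, but a common way this sort of matching is done is to map each face to a numerical “embedding” and pick the statue whose embedding is closest. A rough sketch, assuming the open-source face_recognition library and a folder of statue photos – both assumptions of mine, not anything the museum has disclosed:

```python
import face_recognition
import numpy as np

def best_statue_match(portrait_path, statue_paths):
    """Return the statue photo whose face embedding is closest to the portrait's."""
    portrait = face_recognition.load_image_file(portrait_path)
    portrait_encoding = face_recognition.face_encodings(portrait)[0]  # 128-d vector

    best_path, best_distance = None, np.inf
    for path in statue_paths:
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if not encodings:
            continue  # no face detected in this photo
        distance = np.linalg.norm(encodings[0] - portrait_encoding)
        if distance < best_distance:
            best_path, best_distance = path, distance
    return best_path, best_distance

# Hypothetical usage:
# match, score = best_statue_match("me.jpg", ["statue1.jpg", "statue2.jpg"])
```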

Here’s me and Battataï:

Songs “written by AI” from SonyCSL

Songs written by Sony CSL’s “AI”…

From the Sony CSL “flow machines” website:

Flow Machines is a research project funded by the European Research Council (ERC) and coordinated by François Pachet (Sony CSL Paris – UPMC).

The goal of Flow Machines is to research and develop Artificial Intelligence systems able to generate music autonomously or in collaboration with human artists.
We do so by turning music style into a computational object. Musical style can come from individual composers, for example Bach or The Beatles, or a set of different artists, or, of course, the style of the musician who is using the system.

Their “DeepBach” thing was doing the rounds at the end of last year, so I presume there will be more to come.
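For a sense of what “turning music style into a computational object” can mean in its most basic form – and to be clear, this is a toy illustration of mine, not how Flow Machines or DeepBach actually work – here’s a first-order Markov model of note transitions learned from a melody and then sampled to generate a “new” one:

```python
import random
from collections import defaultdict

def learn_transitions(melody):
    """Count which notes follow which in a melody: a crude 'style' object."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=16):
    """Generate a new melody by sampling the learned note-to-note transitions."""
    notes = [start]
    for _ in range(length - 1):
        options = transitions.get(notes[-1])
        if not options:
            break
        notes.append(random.choice(options))
    return notes

# A made-up fragment standing in for a corpus in some "style"
melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "E", "D", "C"]
style = learn_transitions(melody)
print(generate(style, start="C"))
```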

A Universe Explodes. A Blockchain book/novel

Thanks to Max Dovey for the tip on this…

This seems interesting as a sort of provocation about what blockchain says/asks about ownership, perhaps, although I’m not overly convinced by the gimmick of changing words such that readers unravel, or “explode”, the book… I wonder whether The Raw Shark Texts or These Pages Fall Like Ash might be a deeper – or maybe I mean more nuanced – take on such things… however, I haven’t explored this enough yet and it’s good to see Google doing something like this (I think?!)

Here’s a snip from Googler Tea Uglow’s Medium post about this…

It’s a book. On your phone. Well, on the internet. Anyone can read it. It’s 20 pages long. Each page has 128 words, and there are 100 of the ‘books’ that can be ‘owned’. And no way to see a book that isn’t one of those 100. Each book is unique, with personal dedications, and an accumulation of owners, (not to mention a decreasing number of words) as it is passed on. So it is both a book and a cumulative expression of the erosion of the self and of being rewritten and misunderstood. That is echoed in the narrative: the story is fluid, the transition confusing, the purpose unclear. The book gradually falls apart in more ways than one. It is also kinda geeky.
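As far as I can tell from the post, the mechanics amount to: 100 unique copies, each carrying a chain of owners and dedications and losing words as it changes hands. Very much my own toy reconstruction – not the project’s actual contract or code, and the number of words lost per transfer is a guess – but something like:

```python
from dataclasses import dataclass, field

@dataclass
class BookCopy:
    copy_id: int
    words_remaining: int = 128 * 20           # 20 pages of 128 words
    owners: list = field(default_factory=list)
    dedications: list = field(default_factory=list)

    def transfer(self, new_owner, dedication, words_lost=1):
        """Pass the copy on: record the owner and dedication, erode some words.
        (words_lost is an assumption; the post doesn't say how many go each time.)"""
        self.owners.append(new_owner)
        self.dedications.append(dedication)
        self.words_remaining = max(0, self.words_remaining - words_lost)

# The edition: 100 ownable copies, and no way to read any others
edition = [BookCopy(copy_id=i) for i in range(100)]
edition[0].transfer("tea", "for the first reader")
```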