Unfathomable Scale – moderating social media platforms

Facebook logo reflected in a human eye

There’s a really nice piece by Tarleton Gillespie in Issue 04 of Logic, themed on “scale”, which concerns the scale of social media platforms and how we might understand the qualitative, as well as quantitative, shifts that happen when things grow to this size.

The scale is just unfathomable

But the question of scale is more than just the sheer number of users. Social media platforms are not just big; at this scale, they become fundamentally different than they once were. They are qualitatively more complex. While these platforms may speak of their online “community,” singular, at a billion active users there can be no such thing. Platforms must manage multiple and shifting communities, across multiple nations and cultures and religions, each participating for different reasons, often with incommensurable values and aims. And communities do not independently coexist on a platform. Rather, they overlap and intermingle—by proximity, and by design.

The huge scale of the platforms has robbed anyone who is at all acquainted with the torrent of reports coming in of the illusion that there was any such thing as a unique case… On any sufficiently large social network everything you could possibly imagine happens every week, right? So there are no hypothetical situations, and there are no cases that are different or really edgy. There’s no such thing as a true edge case. There’s just more and less frequent cases, all of which happen all the time.

No matter how they handle content moderation, what their politics and premises are, or what tactics they choose, platforms must work at an impersonal scale: the scale of data. Platforms must treat users as data points, subpopulations, and statistics, and their interventions must be semi-automated so as to keep up with the relentless pace of both violations and complaints. This is not customer service or community management but logistics—where concerns must be addressed not individually, but procedurally.

However, the user experiences moderation very differently. Even if a user knows, intellectually, that moderation is an industrial-sized effort, it feels like it happens on an intimate scale. “This is happening to me; I am under attack; I feel unsafe. Why won’t someone do something about this?” Or, “That’s my post you deleted; my account you suspended. What did I do that was so wrong?”
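
The “no edge cases” point is, at bottom, simple arithmetic. Here’s a minimal sketch in Python – all of the figures are made up for illustration, they are not any platform’s actual statistics – of why a one-in-ten-million event stops being hypothetical at this scale:

```python
# Illustrative arithmetic only: these numbers are hypothetical,
# not drawn from any platform's real statistics.
daily_posts = 2_000_000_000   # assume ~2 billion posts/comments per day
event_probability = 1e-7      # a "one in ten million" edge case

expected_daily = daily_posts * event_probability
print(f"Expected occurrences per day: {expected_daily:,.0f}")         # 200
print(f"Expected occurrences per year: {expected_daily * 365:,.0f}")  # 73,000
```

At these volumes even the rarest imaginable case recurs hundreds of times a day, which is exactly why moderation has to be procedural rather than case-by-case.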

Two excellent reflections on ethics in relation to Google’s AI principles

Holly from the UK TV programme Red Dwarf

Two excellent pieces by Anab Jain and Lucy Suchman, which I recommend you read if you’re interested in studying technology (not just AI), reflect upon Google’s announcement of its ‘AI principles’ and its apparent commitment not to work on the US Government’s Project Maven.

Here are a couple of quotes that stood out, but you should definitely read both pieces:

Corporate Accountability – Lucy Suchman

The principle “Be accountable to people,” states “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” This is a key objective but how, realistically, will this promise be implemented? As worded, it implicitly acknowledges a series of complex and unsolved problems: the increasing opacity of algorithmic operations, the absence of due process for those who are adversely affected, and the increasing threat that automation will translate into autonomy, in the sense of technologies that operate in ways that matter without provision for human judgment or accountability. […]

Tackling the Ethical Challenges of Slippery Technology – Anab Jain

The overriding question for all of these principles, in the end, concerns the processes through which their meaning and adherence to them will be adjudicated. It’s here that Google’s own status as a private corporation, but one now a giant operating in the context of wider economic and political orders, needs to be brought forward from the subtext and subject to more explicit debate.

Given the networked nature of the technologies that companies like Google create, and the marketplace of growth and progress that they operate within, how can they control who will benefit and who will lose? What might be the implications of powerful companies taking an overt moral or political position? How might they comprehend the complex possibilities for applications of their products? […]

How many unintended consequences can we think of? And what happens when we do release something potentially problematic out into the world? How much funding can be put into fixing it? And what if it can’t be fixed? Do we still ship it? What happens to our business if we don’t? All of this would mean slowing down the trajectory of growth, it would mean deferring decision-making, and that does not rank high in performance metrics. It might even require moving away from the selection of The Particular Future in which the organisation is currently finding success.

Automating Inequality – Virginia Eubanks and interlocutors [video]

Still from George Lucas’ THX 1138

This Data & Society talk by Virginia Eubanks on her book Automating Inequality, followed by a discussion with Alondra Nelson and Julia Angwin, is excellent. It offers the kind of vital empirical analysis and insight that fleshes out what is, perhaps, more often gestured towards by ‘critical algorithm studies’ folks: ‘auditing algorithms’, analysing what’s in the black box, how systems function, what their material and socio-economic specificity is, and what we can then learn about how particular forms of actually existing automation (and not simply abstract ideals) function.
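
To give a flavour of what ‘auditing algorithms’ can mean in practice, here is a minimal, hypothetical sketch of one common technique, paired testing, in which an opaque scoring system is queried with inputs that are identical except for a single attribute. The score function below is a toy stand-in for illustration, not any real system Eubanks or anyone else has audited:

```python
# A toy "black box": a stand-in for an opaque scoring system we can
# query but not inspect. Entirely hypothetical, for illustration only.
def score(applicant: dict) -> float:
    base = 0.5 + 0.01 * (applicant["income"] // 10_000)
    # A hidden dependency on postcode, a classic proxy variable.
    penalty = 0.2 if applicant["postcode"].startswith("EX4") else 0.0
    return min(1.0, base) - penalty

def paired_audit(template: dict, attribute: str, values: list) -> dict:
    """Query the black box with inputs identical except for one attribute."""
    return {v: score({**template, attribute: v}) for v in values}

applicant = {"income": 30_000, "postcode": "EX1 1AA"}
print(paired_audit(applicant, "postcode", ["EX1 1AA", "EX4 4QJ"]))
# Any gap between the two scores is attributable to postcode alone.
```

Real audits, of the kind Eubanks and Angwin’s ProPublica team have done, are of course far messier, but the underlying move – varying one input and observing the output – is the same.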

Eubanks talks for the first twenty minutes or so, and a discussion follows. This is really worth watching if you’re interested in doing algorithm-studies-type work, or in doing ‘digital geographies’ that don’t simply lapse into ontology talk.

‘Pax Technica’ Talking Politics, Naughton & Howard

Nest – artwork by Jakub Geltner

This episode of the ‘Talking Politics’ podcast is a conversation between LRB journalist John Naughton and the Oxford Internet Institute’s Professor Philip Howard, ranging over a number of issues but largely circling around the political issues that may emerge from ‘Internets of Things’ (the plural is important to the argument), which are discussed in Howard’s book ‘Pax Technica’. Worth a listen if you have time…

One of the slightly throwaway bits of the conversation, which didn’t concern the tech but interested me, was when Howard comments on the kind of book Pax Technica is – a ‘popular’ rather than ‘scholarly’ book – and how that had led to a sense of dismissal by some. It seems nuts (to me, anyway), when we’re all supposed to be engaging in ‘impact’, ‘knowledge exchange’ and so on, that opting to write a £17 paperback that opens out debate, instead of an £80+ ‘scholarly’ hardback, is frowned upon. I mean, I understand some of the reasons why, but still…

Through a data broker darkly…

Here’s an exercise to do, as a non-specialist, for yourself or maybe as part of a classroom activity: discuss what Facebook (or data brokers, credit checkers, etc.) might know about me/us/you, how accurate that data/information might be, and what that means for our lives.

One of the persistent themes in how we tell stories about the ‘information society’, ‘big data’, corporate surveillance and so on is the sheer extent of the data held about each and every one of us. Lots of stories are told on the back of that, and there are, of course, real-life consequences to inaccuracies.

Nevertheless, an interesting way of starting the exercise above is to compare and contrast the following two articles:

Corporate Surveillance in Everyday Life:

The exploitation of personal information has become a multi-billion industry. Yet only the tip of the iceberg of today’s pervasive digital tracking is visible; much of it occurs in the background and remains opaque to most of us.

I Bought a Report on Everything That’s Known About Me Online:

If you like percentages, nearly 50 percent of the data in the report about me was incorrect. Even the zip code listed does not match that of my permanent address in the U.S.; it shows instead the zip code of an apartment where I lived several years ago. Many data points were so out of date as to be useless for marketing–or nefarious–purposes: My occupation is listed as “student”; my net worth does not take into account my really rather impressive student loan debt. And the information that is accurate, including my age and aforementioned net worth (when adjusted for the student debt), is presented in wide ranges.

Of course, even where the data is incorrect it still matters – those inaccuracies have real-world consequences, and the granularity of the accuracy only matters in certain circumstances. So, thinking about how and why the data captured about us matters – what it might facilitate, allow, or prevent us or those around us from doing – seems like an interesting activity to occupy thirty minutes or so…
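
If you want to put a rough number on the exercise, as the second article above does (“nearly 50 percent… incorrect”), a minimal sketch for tallying a broker’s report about you might look like this – all of the fields and verdicts below are invented by way of example:

```python
# Hypothetical report fields and verdicts; substitute your own when
# doing the exercise. Format: field -> (broker's claim, correct?)
report = {
    "occupation": ("student", False),
    "zip_code": ("4053", False),        # an out-of-date address
    "age_range": ("25-34", True),
    "net_worth": ("$50k-$100k", True),  # a wide range, technically right
    "homeowner": ("yes", False),
}

correct = sum(1 for _, ok in report.values() if ok)
print(f"{correct}/{len(report)} fields correct "
      f"({100 * correct / len(report):.0f}% accurate)")
```

The more interesting part of the exercise is less the percentage than asking, field by field, what an incorrect entry could actually do to you.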

CFP> Creative propositions and provocations on the heritages of data-trade-place-value

Paula Crutchlow, Ian Cook and I invite submissions for the following session at this year’s RGS-IBG conference. Please do share this with anyone who may be interested (they don’t have to be geographers). As we say below, we welcome any kind of creative response to the theme. The session builds on Paula’s PhD project, the Museum of Contemporary Commodities, which will be active before and throughout the conference in the RGS-IBG building.

Museum of Contemporary Commodities: creative propositions and provocations on the heritages of data-trade-place-value

How do we open out the messy digital geographies of trade, place and value to the world? How can we work with the digital beyond archives, spectacle and techno-dystopian imaginations? How do we do so in ways that are performative, collaborative and provocative of the digital?

This session builds on the planned hosting of the Museum of Contemporary Commodities (MoCC) in the RGS-IBG’s Pavilion in the days leading up to the annual conference (and its partial installation in the RGS-IBG building during the conference), where it will join the V&A, Science and Natural History Museums on London’s Exhibition Road. Developed as acts of valuing the things we buy today as the heritage of tomorrow, MoCC’s artworks take the form of dynamic, collaborative hacks and prototypes; socio-material processes, objects and events that aim to enrol publics in trade justice debates in light-footed, life-affirming, surprising and contagious ways as part of their daily routines.

We invite prospective participants to offer propositions and provocations that stitch into, or unpick, the complex and sometimes knotty patchwork quilt of data-trade-place-value. This is an invitation to contribute to and convene conversations that enliven geographical understandings of the governance, performance, placings and values/valuing of contemporary (digitally) mediated material culture. The resulting session is not conceived as a ‘conventional’ paper session: we invite ten-minute contributions that might take the form of an essay, a performance, a video, or many other creative responses to the theme.

This invitation should be understood in its broadest sense. We are interested in the commingling and mash-up of the theme(s) data-trade-place-value. We very much encourage submissions that push back against the normative authorities or discourses surrounding ‘the digital’ (however that might be conceived). So, we hope that all involved in the session will thereby be challenged and inspired by creative propositions and provocations that begin to get to the heart of how we open out the messy digital geographies of trade, place and value to the world.
Themes could include:

  • lively methods that work with and through participatory media
  • intimacy, humour, trust and the internet of things
  • mashups, subversions and hacks of big data from the bottom up
  • discourses and practices of future orientation and the spatial imaginations of ‘the digital’
  • an intersectional internet and the rise of ‘platforms’
  • alternative trade models, value systems and networked culture
  • DIWO (Do It With Others), scholar-activism & public pedagogy
  • the economic geographies of the battle for ‘open’

Please submit 250-word abstracts to us by email by 7 February and we will get back to you by 13 February.

Character assassinations? There’s an app for that…

Interesting article on the Washington Post site, tweeted by David Murakami Wood, that talks about a service/app called “Peeple”, which seeks to be a “Yelp for people” – to enable us to ‘rate’ and ‘review’ one another… A market-driven death-knell for treating one another like ‘people’ (rendering the name rather ironic) and another attempt to pull a further aspect of ‘ordinary’ life into the attention economy. This is an attempted renegotiation of the ‘normative’: what Daniel Miller and Sophie Woodward, in their book Blue Jeans, describe as “the expectation that actions within a social field are likely to be judged as right or wrong, appropriate or inappropriate, proper or transgressive”. What if reviewing one another became ‘normal’..?(!!) *sigh*