Getting in ‘the zone’: Luxury & Paranoia, Access & Exclusion – Capital and Public Space

Uber surge pricing in LA

Another interesting ‘long form’ essay on the Institute of Network Cultures site. This piece by Anastasia Kubrak and Sander Manse directly addresses some contemporary themes in geographyland – access, ‘digital’-ness, exclusion, ‘rights to the city’, technology & urbanism and ‘verticality’. The piece turns around an exploration of the idea of a ‘zone’ – ‘urban zoning’, ‘special economic zones’, ‘export processing zones’, ‘free economic/enterprise zones’, ‘no-go zones’. Some of this, of course, covers familiar ground for geographers but it’s interesting to see the argument play out. It seems to resonate, for example, with Matt Wilson’s book New Lines.

Here’s some blockquoted bits (all links are in the original).

Luxury & Paranoia, Access & Exclusion: On Capital and Public Space

We get into an Uber car, and the driver passes by the Kremlin walls, guided by GPS. At the end of the ride, the bill turns out to be three times as expensive as usual. What is the matter? We check the route, and the screen shows that we travelled to an airport outside of Moscow. Impossible. We look again: the moment we approached the Kremlin, our location automatically jumped to Vnukovo. As we learned later, this was caused by a GPS fence set up to confuse and disorient aerial sensors, preventing unwanted drone flyovers.

How can we benefit as citizens from the increase in sensing technologies, remote data-crunching algorithms, leaching geolocation trackers and parasite mapping interfaces? Can the imposed verticality of platform capitalism by some means enrich the surface of the city, and not just exploit it? Maybe our cities deserve a truly augmented reality – reality in which value generated within urban space actually benefits its inhabitants, and is therefore ‘augmented’ in the sense of increased or made greater. Is it possible to consider the extension of zoning not only as an issue, but also as a solution, a way to create room for fairer, more social alternatives? Can we imagine the sprawling of augmented zones today, still of accidental nature, being utilized or artificially designed for purposes other than serving capital?

Gated urban enclaves also proliferate within our ‘normal’ cities, perforating the existing social fabric. Privatization of the urban landscape affects our spatial rights, such as simply the right of passage: luxury stores and guarded residential areas already deny access to the poor and marginalized. But how do these acts of exclusion happen in cities dominated by the logic of platform capitalism? What happens when more tools become available to scan, analyze and reject citizens on the basis of their citizenship or credit score? Accurate user profiles come in handy when security is automated in urban space: surveillance induced by smart technologies, from electronic checkpoints to geofencing, can amplify exclusion.

This tendency becomes clearly visible with Facebook being able to allow for indirect urban discrimination through targeted advertising. This is triggered by Facebook’s ability to exclude entire social groups from seeing certain ads based on their user profile, so that upscale housing-related ads might be hidden from them, making it harder for them to leave poorer neighborhoods. Meanwhile Uber is charging customers based on the prediction of their wealth, varying prices for rides between richer and poorer areas. This speculation on value enabled by the aggregation of massive amounts of data crystallizes new forms of information inequality in which platforms observe users through a one-way mirror.

If platform economies take the city hostage, governmental bodies of the city can seek how to counter privatization on material grounds. The Kremlin’s notorious GPS spoofing fence sends false coordinates to any navigational app within the city center, thereby also disrupting the operation of Uber and Google Maps. Such gaps on the map – blank spaces – are usually precoded in spatial software by platforms, and can expel certain technologies from a geographical site, leaving no room for negotiation. Following the example of Free Economic Zones, democratic bodies could gain control over the city again by artificially constructing such spaces of exception. Imagine rigorous cases of hard-line zoning such as geofenced Uber-free Zones, concealed neighborhoods on Airbnb, areas secured from data-mining or user-profile-extraction.

Vertical zoning can alter the very way in which capital manifests itself. The Bristol Pound is an example of a city-scale local currency, created specifically to keep added value in circulation within one city. It is accepted by an impressive number of local businesses and for paying monthly wages and taxes. Though the Bristol Pound still circulates in paper, today we can witness a global sprawl of blockchain-based community currencies, landing within big cities or even limited to neighborhoods. Remarkably, the Colu Local Digital Wallet can be used in Liverpool, the East London area, Tel Aviv and Haifa – areas with a booming tech landscape or strong sense of community.
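Purely by way of illustration of the ‘geofenced Uber-free Zones’ imagined in the quoted passage above: a geofence of that kind is, at bottom, a polygon precoded in spatial software plus a point-in-polygon test run before a service is offered. The sketch below is hypothetical – the coordinates, the zone name and the `ride_available` function are invented for illustration and not taken from any real platform.

```python
# Hypothetical sketch of a 'geofenced Uber-free Zone' precoded in spatial
# software: a polygon plus a point-in-polygon check run before dispatch.
# All names and coordinates below are invented for illustration.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon of (x, y) vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if ((yi > y) != (yj > y)) and (x < (xj - xi) * (y - yi) / (yj - yi) + xi):
            inside = not inside
        j = i
    return inside

# A fictional exclusion zone drawn around a city-centre block, as (lon, lat) vertices.
RIDE_HAIL_FREE_ZONE = [
    (37.612, 55.748), (37.625, 55.748),
    (37.625, 55.755), (37.612, 55.755),
]

def ride_available(lon, lat):
    """Platform-side check: refuse to offer the service inside the geofenced zone."""
    return not point_in_polygon(lon, lat, RIDE_HAIL_FREE_ZONE)

print(ride_available(37.618, 55.751))  # False – inside the zone, no rides offered
print(ride_available(37.600, 55.740))  # True – outside the zone
```

The same pattern, with the test inverted or attached to different rules, is how a platform might conceal listings in particular neighbourhoods or switch off data collection inside an area ‘secured from data-mining’.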

Ellen Ullman’s Life in Code

An interesting account of Ellen Ullman’s most recent book, Life in Code – which sounds fantastic and very much worth a read, just like her earlier Close to the Machine – and something of its context. From the NYT:

LIFE IN CODE

A Personal History of Technology

By Ellen Ullman

Illustrated. 306 pp. Farrar, Straus & Giroux.

As milestone years go, 1997 was a pretty good one. The computers may have been mostly beige and balky, but certain developments were destined to pay off down the road. Steve Jobs returned to a floundering Apple after years of corporate exile, IBM’s Deep Blue computer finally nailed the world-champion chess master Garry Kasparov with a checkmate, and a couple of Stanford students registered the domain name for a new website called google.com. Nineteen ninety-seven also happened to be the year that the software engineer Ellen Ullman published “Close to the Machine: Technophilia and Its Discontents,” her first book about working as a programmer in a massively male-dominated field.

That slender volume became a classic of 20th-century digital culture literature and was critically praised for its sharp look at the industry, presented in a literary voice that ignored the biz-whiz braggadocio of the early dot-com era. The book had obvious appeal to technically inclined women — desktop-support people like myself then, computer-science majors, admirers of Donna J. Haraway’s feminist cyborg manifesto, those finding work in the newish world of website building — and served as a reminder that someone had already been through it all and took notes for the future.

Then Ullman retired as a programmer, logging out to go write two intense character-driven thriller novels and the occasional nonfiction essay. The digital economy bounced back after the Epic Fail of 2000 and two decades later, those techno-seeds planted back in 1997 have bloomed. Just look at all those smartphones, constantly buzzing with news alerts and calendar notifications as we tell the virtual assistant to find us Google Maps directions to the new rice-bowl place. What would Ullman think of all this? We can now find out, as she’s written a new book, “Life in Code: A Personal History of Technology,” which manages to feel like both a prequel and a sequel to her first book.

Read the rest on the NYT website.

Reblog> Whither the Creative City? The Comeuppance of Richard Florida

Nice post from Jason Luger:

Whither the Creative City? The Comeuppance of Richard Florida

Talent, Technology, and Tolerance, said Florida (2002), were the pre-conditions for a successful urban economy. Florida’s ‘creative class’ theory, much copied, emulated and critically maligned, delineated urban regions with ‘talent’ (PhDs); ‘technology’ (things like patents granted); and ‘tolerance’ (represented by a rather arbitrary ‘gay index’ of same-sex households in census data).

This combination, according to Florida’s interpretation of his data, indicated urban creative ‘winners’ versus urban ‘losers’: blue collar cities with more traditional economies and traditional worldviews. Creative people want to be around other creative people, wrote Florida, so failing to provide an ideal urban environment for them will result in their ‘flight’ (2005) and the loss of all the benefits of the creative economy. Therefore, to win in the ‘new economy’ (Harvey, 1989), cities need to compete for, and win the affections of, the ‘creative class’. Or so Florida then-believed.

Read the full post.

Growing criticism of Stiegler

I’ve begun to see some interesting criticism of Bernard Stiegler’s more recent activities, both in terms of the books and the Plaine Commune project, which I confess resonates with some misgivings I have had for a little while [e.g. Chatonsky, Moatti, Silberzahn, Vial]. No doubt there are responses to these criticisms and so I’m not going to attempt to reiterate them wholesale or vouch for them – not least because I haven’t read the more recent work. Let me be clear in this – I think critical reflection on an argument or project is not only healthy but a necessary part of the humanities and sciences. I am neither seeking to ‘write off’ Stiegler’s work nor dogmatically defend it.

Nevertheless, I think there are two particular, related, points that occur in a number of different criticisms online that it seems to me may hold some water. These are:

(1) Increasingly in recent work, which is being produced at breakneck speed, there is a quickness with concepts that has drawn criticism. Howells and Moore (2013), in their introduction to the first anglophone secondary text on Stiegler’s work, say: “In contrast to the patient, diligent deep-readings we have come to expect since Derrida, the image that emerges of Stiegler is perhaps a thinker who zooms in and out of texts, skimming them to fit what he is looking to find” (p. 9). Howells and Moore, in the end, see this as a strength. Nevertheless, there is a growing ‘jargon’ that loosens and perhaps undermines the analytical purchase of the arguments Stiegler attempts to make. This is particularly evident in the kinds of names for the over-arching project that have begun to emerge, for example: “organology”, “neganthropology” and “pharmacosophy”. Furthermore, it has been argued that, especially in relation to the use of the ideas of “disruption” and “entropy”, Stiegler makes all-too-quick analogies and equivalences between different meanings or applications of a given term, which waters down or perhaps even undermines its use. So, for example, in terms of the increasing use of philosophical concepts, for which he stands accused of creating ever-more impenetrable jargon, we might look to his recent conjugation of the ideas of automation, the anthropocene and entropy/negentropy. In an extensive and rather forthright blogpost, Alexandre Moatti takes an example from Automatic Society, Volume 1 (I’ve provided the full quote, whereas Moatti just takes a snippet):

“All noetic bifurcation, that is to say all quasi-causal bifurcation, derives from a cosmic potlatch that indeed destroys very large quantities of differences and orders, but  it does so by projecting a very great difference on another plane, constituting another ‘order of magnitude’ against the disorder of a kosmos in becoming, a kosmos that, without this projection of a yet-to-come from the unknown, would be reduced to a universe without singularity. A neganthropological singularity (which does not submit to any ‘anthropology’) is a negentropic bifurcation in entropic becoming, of which it becomes the quasi-cause, and therein a point of origin – through this improbable singularity that establishes it and from which, intermittently and incrementally, it propagates a process of individuation” [p. 246].

I cannot honestly say that I can confidently interpret or translate the meaning of this passage; perhaps someone will comment below with their version. However, I am confident that the translator (from French), Dan Ross, will have done a thorough job of trying to capture the sense of the passage as best he can. Nevertheless, and even with a knowledge of the various sources the terminology implies (made more or less explicit in the preceding parts of the book), the prose is perhaps problematic. Here’s my take, for what it’s worth:

What I think is being suggested in the quote above is that all life we call ‘human’ is supported by language, writing and other prostheses (‘noetic life’) and that when these social systems shift and are split (bifurcation) they destroy productive forms of difference – different forms of understanding, different ways of knowing and different ways of living, perhaps (‘differences and orders’). In so doing, this hives off a larger share of the potential for life to change in various ways (‘difference on another plane’ and the ‘disorder of a kosmos‘), possibly prevents particular kinds of future (‘yet-to-come’ – quite similar to Derrida’s distinction between l’avenir and futur) and leaves an impoverished form of being/life (‘a universe without singularity’). We can only really recognise these points of rupture after the fact, because the ruptures themselves are also the seeds of the moment of realisation. To act positively and sustainably (‘negentropic bifurcation in entropic becoming’), both towards a positive projection of possible futures (‘improbable singularity’) and in response to these various kinds of ‘undoing’ of potential, we attempt to create new possibilities (‘a process of individuation’).

Another related, perhaps more serious, criticism is that Stiegler quickly moves between and analogises things in ways that might be considered to push the bounds of credulity. For example, the strategies of Daech/IS, management consultants and ‘GAFA’ (Google, Amazon, Facebook, Apple) are considered analogous by Stiegler when discussing ‘disruption’ as “a phenomenon of the acceleration of innovation, which is the basis of the strategy [of disruption] developed in Silicon Valley” [Here’s a strong response to that argument, in French]. Of course the idea of ‘disruptive practices’ as a force that can be diagnosed as ‘destroying social equilibrium’ in different domains is seductive. Nevertheless, isn’t part of that seduction an over-generalisation of ‘disruption’, one that elides the differences between people, objects and strategies that are simply too different to be mixed together? Isn’t there a danger that ‘disruption’ becomes yet another ‘neoliberalism’ – a catch-all diagnosis of all things bad against which we should (ineffectually) rail? It sometimes seems to me that there is a peculiar, slightly snobby, anti-Americanism that undergirds some of this, which, if so, does Stiegler’s work a disservice.

Taking this further, the analogies and the systemisation of the ‘jargon’-like concepts creates what Alexandre Moatti, in the Zilsel blogpost, calls two families of antonyms:

  • (automation) entropy, anthropisation, the Anthropocene era or the ‘Entropocene’, anthropocenologists
  • (deautomation) negentropy, neganthropy, neganthropisation and the ‘Neganthropocene’ era

There is a bit of a pattern here – to create systems of binaries. In the Disbelief and Discredit series it was Otium and Negotium. In Taking Care it was psycho- vs. noo-: politics, power, techniques and technologies. Perhaps this is what it means to put the pharmakon into practice for Stiegler? I worry that this habit of rendering systems of ideas that can be wielded authoritatively has the potential for a fairly negative implementation by ‘Stieglerian’ scholars – binary systems can be used as dogma. I think we’ve seen enough of that in the social sciences to be wary here.

I recognise that sometimes ‘difficult’ language is needed to get at difficult ideas. There has been controversy around these sorts of themes in anglophone scholarship before, not least in relation to the work of Judith Butler, and it’s possible to read about that elsewhere. I neither want to ‘throw stones’ nor ‘sit on the fence’ here; I admire aspects of Stiegler’s writing and thinking because it does, it seems to me anyway, get at some interesting and thorny issues. Nevertheless, the blizzard of concepts, the increasingly long and hard-to-follow sentences and, I think, the quickness of fairly sweeping arguments, especially when they write off big chunks of other people’s work (which is the case in the chapter that passage is from, in relation to Lévi-Strauss), feel to me like a series of ungenerous moves. There may be all sorts of reasons for this but it feels like a shame to me…

(2) Following from (1), there is a sense in which the increasing use of philosophical jargon, the analogies and the fairly rigid system of concepts used to interlink many of these themes creates what Moatti calls a kind of closed Stieglerian environment of thought, which is mutually reinforcing but perhaps then limits participation through its jargon and activities: “there is a Stieglerian environment: that of its association Ars Industrialis [and the Institut de Recherche et d’Innovation at the Centre Pompidou], of the summer academies it organizes in the country residence of Epineuil-le-Fleuriel… but also of certain contemporary authors whom he cites and who gravitate around him (very often the same group)” [Moatti]. There is a danger that, both in the systematisation of the concepts, as discussed above, and in the somewhat informal, cabal-like behaviour of the associations, groups and schools, a closed group forms.

There have also been concerns expressed about the nature of the support for the programmes and the work being undertaken by Ars Industrialis and IRI, and how this feeds into the work itself. In particular, it has been noted that the Plaine Commune experiment [discussed in this interview] in producing a ‘contributory territory’, in the vein of Stiegler’s notion of the ‘economy of contribution‘, relies on corporate sponsorship (including Orange and Dassault Systèmes), and that funds have been channelled into some rather plum roles, such as the creation of a relatively well-paid and cushy “Participatory Research Chair” (see: Faire de Plaine Commune en Seine-St-Denis) based at MSH Paris-Nord – in the territory, yes, but arguably not of it.

A corollary to this is an observation articulated by others, and something of which I think other philosophers have also been guilty: drawing upon a narrow and fairly specific set of studies to support sweeping generalisations. For Stiegler this has been the case in relation to attention, with the use of one particular paper by N. Katherine Hayles, selectively drawing upon a particular scientific study; for ‘the anthropocene’, for which he significantly relies on Bonneuil and Fressoz (2016); and for automation, with the use of a particular quote by Bill Gates [1] and a widely cited but also contested speculative study by Frey and Osborne (2013) from the Oxford Martin School. This habit, in particular, I fear rather undermines the cogency of Stiegler’s arguments.

Where does this leave someone interested in these ideas? Or, more specifically, given this is a post on a personal blog through which I have significantly drawn upon Stiegler’s work: where does it leave my own interest? I don’t think I would go as far as some and declare that the recent work by Bernard Stiegler should be disregarded, or, in the extreme case, that Stiegler should no longer be considered worthy of the title ‘philosopher’ [See: Stéphane Vial’s Bernard Stiegler : la fin d’un philosophe]. Nevertheless, I think that, sadly, there probably is cause to be hesitant in engaging with Stiegler’s work on ‘the anthropocene’ and ‘automation’, for all of the reasons discussed above. I still think there are plenty of interesting arguments and ideas expressed by Bernard Stiegler; I just think, personally, I will need to take much greater care in working through the provenance of these ideas from now on.

Notes

1. see the Business Insider article ‘Bill Gates: People Don’t Realize How Many Jobs Will Soon Be Replaced By Software Bots‘, quoting from the 2014 conversation with Bill Gates at the American Enterprise Institute: From poverty to prosperity: A conversation with Bill Gates [approx. 46 minutes in]. Quote: ‘Capitalism, in general, will over time create more inequality and technology, over time, will reduce demand for jobs, particularly at the lower end of the skill set. … Twenty years from now labour demand for lots of skill sets will be substantially lower and I don’t think people have that in their mental model’.

“Racist soap dispenser” and artifactual politics

'Racist' soap dispenser

Some videos have been widely shared concerning soap dispensers and taps in various public or restaurant toilets that appear to have been calibrated to work with light skin colour and consequently appear not to work with darker skin. See below for a couple of example videos.

Of course, there are (depressingly) all sorts of examples of technologies being calibrated to favour people who conform to a white racial appearance, from Kodak’s “Shirley” calibration cards, to Nikon’s “Did someone blink?” filter, to HP’s webcam face-tracking software. There are unfortunately more examples, which I won’t list here, but suffice it to say this demonstrates an important aspect of artefactual and technological politics – things often carry the political assumptions of their designers. Even if this was an ‘innocent’ mistake, such as the result of a manufacturing error skewing the calibration, it demonstrates the sense in which there remains a politics to the artefact/technology in question, because the agency of the object remains skewed along lines of difference.
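To make the ‘calibration’ point concrete: many of these dispensers and taps use an infrared reflectance sensor, and darker skin reflects less infrared light, so a trigger threshold tuned only against light-skinned hands may simply never fire for some users. The sketch below is hypothetical – the threshold value, the function name and the readings are all invented for illustration – but it shows how a single calibration constant can become the point at which the artefact starts excluding.

```python
# Hypothetical sketch of an IR-reflectance trigger of the kind used in many
# automatic soap dispensers and taps. The 'politics' sits in one number:
# a threshold tuned only against light-skinned test hands (which reflect
# more infrared) will never be tripped by hands that reflect less.

DISPENSE_THRESHOLD = 0.45  # invented calibration constant

def hand_detected(ir_reflectance: float) -> bool:
    """Return True if the reflected IR intensity (0.0-1.0) exceeds the threshold."""
    return ir_reflectance >= DISPENSE_THRESHOLD

# Illustrative, made-up readings (darker skin reflects less infrared).
readings = {
    "lighter-skinned hand": 0.62,
    "darker-skinned hand": 0.30,
    "white paper towel": 0.85,
}

for label, reflectance in readings.items():
    print(label, "->", "soap dispensed" if hand_detected(reflectance) else "nothing happens")
```

Re-tuning the threshold, or sensing differently altogether, is trivial in code; the point is that someone chose, tested and shipped this one.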

There are perhaps two sides to this politics, if we resurrect Langdon Winner’s (1980) well-known argument about artefactual politics and the resulting discussion. First, as in the well-known story (cited by Winner, gleaned from Caro) of Robert Moses’ New York bridges, “someone wills a specific social state, and then subtly transfers this vision into an artefact” (Joerges 1999: p. 412). This is what Joerges (1999) calls the design-led version of ‘artefacts-have-politics’, following Winner (I am not condoning Joerges’ rather narrow reading of Winner, just using a useful short-hand).

Second, following Winner, artefacts can have politics by virtue of the kinds of economic, political, social (and so on) systems upon which they are predicated. There is the way in which such a deliberate or mistaken development, such as the tap sensor, is facilitated or at least tolerated by virtue of the kinds of standards that are used to govern the design, manufacture and sale or implementation of a given artefact/technology. So, a bridge that apparently excludes particular groups of people by preventing their most likely means of travel, a bus, from passing under it, or a tap that only works with lighter skin colour, can pass into circulation, or socialisation perhaps, by virtue of normative and bureaucratic frameworks of governance.

In this sense, and again following Winner, we might think about the ways these outcomes transcend “the simple categories of ‘intended’ and ‘unintended’ altogether”. Rather, they represent “instances in which the very process of technical development is so thoroughly biased in a particular direction that it regularly produces results heralded as wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner 1980: pp. 125-6).

So, even when considered the results of error, and especially when the mechanism for regulating such errors is considered to be ‘the market’—with the expectation that if the thing doesn’t work it won’t sell and the manufacturer will be forced to change it—the assumptions behind the rectification of the ‘error’ carry a politics too (perhaps in the sense of Weber’s loaded value judgements).

Third, there is what Woolgar (1991 – in a critical response to Winner) calls the ‘contingent and contestable versions of the capacity of various technologies’, which might include the ‘manufacturing mistakes’ but would also include the videos produced and their support or contestation through responses in other videos and in media coverage.

This analysis might become further complicated by widening our consideration of the ways in which contingencies render a given artefact/ technology political.

Take, for example, an ‘Internet of Things’ device that might seem innocuous, such as a ‘smart thermostat’ that ‘learns’ when you use the heating and begins to schedule your heating automatically. There are immediate technical issues that might render such a device political, such as the strength of its security settings, and so whether or not it could be hacked, whether or not you as the ‘owner’ of the device would know, and what you might be able to do in response.

Further, there are privacy issues if the ‘smart’ element is actually not embedded in the device but enabled through remote services ‘in the cloud’: do you know where your data is, how it is being used, whether it identifies you, and so on? Further still, the device might appear to be a one-off expense but may actually require a further payment or subscription to work in the way you expected. For example, I bought an Amazon Kindle that had advertising as the ‘screen saver’ and I had to pay an additional £10 to remove it.

Even further, it may be that even if the security, privacy and payment systems are all within the bounds of what one might consider to be politically or ethically acceptable, there may still be political contingencies that exclude or disproportionately affect particular groups of people. The thermostat might only work with particular boilers or may require a ‘smart’ meter, so it may also only work with particular energy subscription plans. Such plans, even if they’re no more expensive, might require good credit ratings or other pre-conditions to access them, which are not immediately obvious. Likewise, the thermostat may not work with pre-payment meter-driven systems, which necessarily disadvantages those without a choice – those renting, for example.

The thermostat may require a particular kind of smartphone to access its functionality, which again may require particular kinds of phone contract, and these may require credit ratings and so on. The manufacturer of the thermostat might cease to trade, or get bought out, and the ‘smart’ software ‘in the cloud’ may cease to function – you may therefore find yourself without a thermostat. If the thermostat was installed in a ‘vulnerable’ person’s home in order to enable remote monitoring by concerned family members, this might create anxiety and risk.
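To make the cloud-dependency point concrete, here is a deliberately simplified, hypothetical sketch of a thermostat client whose scheduling ‘intelligence’ lives behind a vendor’s remote API. The endpoint URL, the response format and the fallback temperature are all invented; the point is the shape of the dependency, not any real product’s API.

```python
# Hypothetical sketch: a 'smart' thermostat whose schedule is computed in
# the vendor's cloud. The URL and response format below are invented for
# illustration; the point is the dependency, not a real API.
import json
import urllib.request

CLOUD_ENDPOINT = "https://api.example-thermostat.com/v1/schedule"  # fictional vendor service
LOCAL_FALLBACK_TEMP_C = 18.0  # dumb built-in setpoint if the cloud is gone

def get_target_temperature(device_id: str) -> float:
    """Ask the vendor's cloud for the current target temperature.

    If the service is retired, the vendor goes bust or is bought out, or the
    network is down, the 'smart' behaviour vanishes and the device falls back
    to a fixed local setpoint.
    """
    try:
        with urllib.request.urlopen(f"{CLOUD_ENDPOINT}?device={device_id}",
                                    timeout=5) as response:
            payload = json.load(response)
        return float(payload["target_celsius"])
    except Exception:
        # No vendor service means no learned schedule, no remote monitoring,
        # no app control: the thermostat quietly stops being 'smart'.
        return LOCAL_FALLBACK_TEMP_C

print(get_target_temperature("thermostat-01"))
```

Everything in the surrounding paragraphs – the phone contract, the energy plan, the credit check – sits upstream of this one call, which is part of what makes the politics of such devices so easy to miss.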

As apparently individual, or discrete, artefacts/technologies become more entangled in sociotechnical systems of use (as Kline says), with their concomitant contingencies, the politics of these things has the potential to become more opaque.

So, all artefacts have politics, and the examples within this post might be considered useful, if troubling, contemporary examples for discussion in research projects and in the classroom (as well as, one might hope, the committee rooms of regulators, or parliaments).

P.S. I think this now is a chunk of a lecture rewritten for my “Geographies of Technology” module at Exeter, heh.

Kathi Weeks interview – Feminism & the refusal of work

Glitched Rosie the Riveter poster

Interesting interview with Kathi Weeks, whose book The Problem with Work is really good. Follow the link to the whole interview, but please find a snippet below:

Marxist feminists went a long way towards demystifying the so-called “private” practices, relations, and institutions.

…let me offer a crude but I think useful distinction between two periods of Marxist feminist work, one past and one present.
First, the past. In the 1970s, Anglo-American Marxist feminists focused on mapping the relationship between two systems of domination: capitalism and patriarchy. One could characterize this phase as the attempt to bring a Marxist critique of work into the field of domestic labor and the familial relations of production. By examining domestic based caring work, housework, consumption work, and community-creation work as forms of reproductive labor upon which productive labor more narrowly conceived depends, and by viewing the household as a workplace and the family as a regime that organizes, distributes and manages that labor, Marxist feminists went a long way towards demystifying these so-called “private” practices, relations, and institutions. On the one hand, they were concerned with the theoretical question of how to understand the relationship between capitalism and patriarchy: were they best conceived as two related systems or as one fully intertwined system? On the other hand, they were also focused on the closely related practical question of alliances: should feminist groups be autonomous from or integrated with other anticapitalist (and often antifeminist) movements?

Today we find ourselves in a different situation that holds new possibilities for the relationship between Marxism and feminism. Whereas 1970s feminists struggled to bring a Marxist analytic tailored to the study of waged labor to a very different kind of unwaged laboring practice that had not been considered part of capitalist production, today I think that in order to grasp new forms of waged work we need to draw on the older feminist analyses of waged and unwaged “women’s work.”

Some describe the present moment in terms of the “feminization of labor.” It’s not my favorite term, but what I understand by it is a way to describe how, in neoliberal post-Fordist economies, more and more waged jobs come to resemble traditional forms of feminized domestic work. This is particularly evident in the rise of precarious forms of low-wage, part-time, informal, and insecure forms of employment, and in the growth of service sector jobs that draw on workers’ emotional, caring, and communicative capacities that are undervalued and difficult to measure.

Feminist theory is no longer only optional for Marxist critique.

To confront this changing landscape of work, instead of using an unreconstructed Marxist analytic to study unwaged forms of domestic work, we need today to draw on Marxist feminist analyses of gendered forms of both waged and unwaged work for their insights into how these forms are exploited and how they are experienced. The practical implication of this is that, if we want to both understand and resist  contemporary forms of exploitation, Marxists can no longer remain ignorant of or separated from feminist theories and practices. As I see it, feminist theory is no longer optional for Marxist critique.

Read the whole interview on Political Critique

CFP: Workshop on Trustworthy Algorithmic Decision-Making

Not sure where I found this, but it may be of interest…

Workshop on Trustworthy Algorithmic Decision-Making
Call for Whitepapers

We seek participants for a National Science Foundation sponsored workshop on December 4-5, 2017 to work together to better understand algorithms that are currently being used to make decisions for and about people, and how those algorithms and decisions can be made more trustworthy. We invite interested scholars to submit whitepapers of no more than 2 pages (excluding references); attendees will be invited based on whitepaper submissions. Meals and travel expenses will be provided.

Online algorithms, often based on data-driven machine-learning approaches, are increasingly being used to make decisions for and about people in society. One very prominent example is the Facebook News Feed algorithm that ranks posts and stories for each person, and effectively prioritizes what news and information that person sees. Police are using “predictive policing” algorithms to choose where to patrol, and courts are using algorithms that predict the likelihood of repeat offending in sentencing. Face recognition algorithms are being implemented in airports in lieu of ID checks. Both Uber and Amazon use algorithms to set and adjust prices. Waymo/Google’s self-driving cars are using Google maps not just as a suggestion, but to actually make route choices.

As these algorithms become more integrated into people’s lives, they have the potential to have increasingly large impacts. However, if these algorithms cannot be trusted to perform fairly and without undue influences, then there may be some very bad unintentional effects. For example, some computer vision algorithms have mis-labeled African Americans as “gorillas”, and some likelihood of repeat offending algorithms have been shown to be racially biased. Many organizations employ “search engine optimization” techniques to alter the outcomes of search algorithms, and “social media optimization” to improve the ranking of their content on social media.

Researching and improving the trustworthiness of algorithmic decision-making will require a diverse set of skills and approaches. We look to involve participants from multiple sectors (academia, industry, government, popular scholarship) and from multiple intellectual and methodological approaches (computational, quantitative, qualitative, legal, social, critical, ethical, humanistic).

Whitepapers

To help get the conversation started and to get new ideas into the workshop, we solicit whitepapers of no more than two pages in length that describe an important aspect of trustworthy algorithmic decision-making. These whitepapers can motivate specific questions that need more research; they can describe an approach to part of the problem that is particularly interesting or likely to help make progress; or they can describe a case study of a specific instance in the world of algorithmic decision-making and the issues or challenges that case brings up.

Some questions that these whitepapers can address include (but are not limited to):

  • What does it mean for an algorithm to be trustworthy?
  • What outcomes, goals, or metrics should be applied to algorithms and algorithm-made decisions (beyond classic machine-learning accuracy metrics)?
  • What does it mean for an algorithm to be fair? Are there multiple perspectives on this?
  • What threat models are appropriate for studying algorithms? For algorithm-made decisions?
  • What are ways we can study data-driven algorithms when researchers don’t always have access to the algorithms or to the data, and when the data is constantly changing?
  • Should algorithms that make recommendations be held to different standards than algorithms that make decisions? Should filtering algorithms have different standards than ranking or prioritization algorithms?
  • When systems use algorithms to make decisions, are there ways to institute checks and balances on those decisions? Should we automate those?
  • Does transparency really achieve trustworthiness? What are alternative approaches to trusting algorithms and algorithm-made decisions?

Please submit white papers along with a CV or current webpage by October 9, 2017 via email to trustworthy-algorithms@bitlab.cas.msu.edu. We plan to post whitepapers publicly on the workshop website (with authors’ permission) to facilitate conversation ahead of, at, and after the workshop. More information about the workshop can be found at http://trustworthy-algorithms.org.

We have limited funding for PhD students interested in these topics to attend the workshop. Interested students should also submit a whitepaper with a brief description of their research interests and thoughts on these topics, and indicate in their email that they are PhD students.

Thinking in public – Baskin on Wurgaft

Bernard-Henri Lévy hit with a custard pie

From an interesting review of Benjamin Aldes Wurgaft’s “Thinking in Public: Strauss, Levinas, Arendt” by Jon Baskin. Found via Anne Galloway.

Arendt, Wurgaft suggests, may remain important today less for her writing on totalitarianism than for her warnings about the rise of the “technocrats” – a new breed of “intellectuals” who pictured political life as involving the accomplishment of pre-established tasks, rather than as an ongoing argument involving perennial questions about what we value, and why.

The technocrats, undoubtedly, are still with us. At one point in his article, Wurgaft cites a widely praised review of Daniel Drezner’s recent book, The Ideas Industry, by the intellectual historian David Sessions. Drezner’s book, says Sessions, shows how today’s would-be public intellectuals are being drowned out by the rise of “thought leaders.” Thought leaders are glorified technicians and TED Talk evangelists, like Sheryl Sandberg, Thomas Friedman, and Parag Khanna, who nevertheless are treated by large audiences as emissaries from the world of ideas. Such figures would seem to fulfill Arendt’s prophecy about the danger of a culture coming to revere elite technocratic authority.

Sessions’s article, though, is not just about the superficiality and corruption of thought leaders – a seductively soft target for his New Republic readership. Sessions also hazards a positive description of what makes someone a real or authentic intellectual, and it is in these passages that his article is truly, if unwittingly, revealing. Whereas the thought leaders are guilty of flattering the whims of the superrich, Sessions claims, a group he approvingly calls the “new intellectuals on the left” have demonstrated their independence by being “willing to expose the prattle of thought leaders, to attack the rhetorical smoke screens of the liberal center, and to defend working-class voters.” Later, crediting a cluster of leftist-associated magazines (including this one) with the revival of American intellectual life, Sessions leaves little doubt as to what he considers qualifies someone to be a genuine public intellectual. To be a genuine public intellectual is to agitate for the working class, and against the “liberal center” or the superrich (also, apparently, to reflexively conflate those two terms). To be a genuine public intellectual is to have the “courage,” as he calls it, to speak truth to power.
[…]
What does it mean, then, to be an “intellectual on the left”? Although I confess the phrase strikes me as somewhat mysterious, it is not impossible to imagine a definition: an intellectual on the left, having arrived at certainty about the correct direction for society, helps formulate and disseminate arguments for moving society in that direction. But if we accept this definition as meaningful, we are compelled to agree with Strauss and Arendt that the figure of the public intellectual represents a debasement of thinking, rather than a model for it. There are plenty of reasons to commit as citizens to political parties or movements – and there may even be reasons to consider that commitment as partly the product of philosophical reasoning. But someone who speaks as a representative of a fixed ideology or group has subjugated the philosopher within themselves to the partisan.

Read the whole article here.