Category Archives: computation

Reblog> Social Justice in an Age of Datafication: Launch of the Data Justice Lab

Via The Data Justice Lab.

Social Justice in an Age of Datafication: Launch of the Data Justice Lab

The Data Justice Lab will be officially launched on Friday, 17 March 2017. Join us for the launch event at Cardiff University’s School of Journalism, Media and Cultural Studies (JOMEC) at 4pm. Three international speakers will discuss the challenges of data justice.

The event is free but requires pre-booking at https://www.eventbrite.com/e/social-justice-in-an-age-of-datafication-launching-the-data-justice-lab-tickets-31849002223

Data Justice Lab — Launch Event — Friday 17 March 4pm — Cardiff University

Our financial transactions, communications, movements, relationships, and interactions with government and corporations all increasingly generate data that are used to profile and sort groups and individuals. These processes can affect individuals as well as entire communities, which may be denied services and access to opportunities, or wrongfully targeted and exploited. In short, they impact on our ability to participate in society. The emergence of this data paradigm therefore introduces a particular set of power dynamics requiring investigation and critique.

The Data Justice Lab is a new space for research and collaboration at Cardiff University that has been established to examine the relationship between datafication and social justice. With this launch event, we ask: What does social justice mean in an age of datafication? How are data-driven processes impacting on certain communities? In what way does big data change our understanding of governance and politics? And what can we do about it?

We invite you to come and participate in this important discussion. We will be joined by the following keynote speakers:

Virginia Eubanks (New America), Malavika Jayaram (Digital Asia Hub), and Steven Renderos (Center for Media Justice).

Virginia Eubanks is the author of Digital Dead End: Fighting for Social Justice in the Information Age (MIT Press, 2011) and co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith (SUNY Press, 2014). She is also the co-founder of Our Knowledge, Our Power (OKOP), a grassroots economic justice and welfare rights organization. Professor Eubanks is currently working on her third book, Digital Poorhouse, for St. Martin’s Press. In it, she examines how new data-driven systems regulate and discipline the poor in the United States. She is a Fellow at New America, a Washington, D.C. think tank, and the recipient of a three-year research grant from the Digital Trust Foundation (with Seeta Peña Gangadharan and Joseph Turow) to explore the meaning of digital privacy and data justice in marginalized communities.

Malavika Jayaram is the Executive Director of the Digital Asia Hub in Hong Kong. Previously she was a Fellow at the Berkman Klein Center for Internet & Society at Harvard University, where she focused on privacy, identity, biometrics and data ethics. She worked at law firms in India and the UK, and she was voted one of India’s leading lawyers. She is Adjunct Faculty at Northwestern University and a Fellow with the Centre for Internet & Society, India, and she is on the Advisory Board of the Electronic Privacy Information Center (EPIC).

Steven Renderos is Organizing Director at the Center for Media Justice. With over 10 years of organizing experience, Steven has been involved in campaigns to lower the cost of prison phone calls, preserve the Open Internet, and expand community-owned radio stations. Steven previously served as Project Coordinator of the Minnesotano Media Empowerment Project, an initiative focused on improving the quality and quantity of media coverage and representation of Latinos in Minnesota. He currently serves on the boards of the Organizing Apprenticeship Project and La Asamblea de Derechos Civiles. Steven (aka DJ Ren) also hosts a show called Radio Pocho at a community radio station and spins at venues in NYC.

The event will be followed by a reception.

Reblog> Free Download: Digital Rights to the City

Via Mark Purcell.


Free Download: Digital Rights to the City

Published Today: Our Digital Rights to the City

Free to download (pdf, epub, mobi): http://meatspacepress.org/


‘Our Digital Rights to the City’ is a small collection of articles about digital technology, data and the city. It covers a range of topics relating to the political and economic power of technologies that are now almost inescapable within the urban environment. This includes discussions surrounding security, mapping, real estate, smartphone applications and the broader idea of a ‘right to the city’ in a post-digital world.

The collection is edited by Joe Shaw and Mark Graham and its contributing authors are Jathan Sadowski, Valentina Carraro, Bart Wissink, Desiree Fields, Kurt Iveson, Taylor Shelton, Sophia Drakopoulou and Mark Purcell.

Please follow us @meatspacepress

Join our mailing list at http://meatspacepress.org/

‘Our Digital Rights to the City’ is also available free at:

* Free to download (epub, most e-readers): epub

* Free to download (pdf): pdf

* Free to download (mobi, for Kindle): mobi

* Free to read (pdf): Here

The internet is mostly bots(?)

When I am king, you will be first against the wall…

In an article for The Atlantic, Adrienne LaFrance observes that a report by the security firm Imperva suggests that 51.8% of traffic online is bot traffic (by which they mean 51.8% of a sample of traffic [“16.7 billion bot and human visits collected from August 9, 2016 to November 6, 2016”] sent through their global content delivery network “Incapsula”):

Overall, bots—good and bad—are responsible for 52 percent of web traffic, according to a new report by the security firm Imperva, which issues an annual assessment of bot activity online. The 52-percent stat is significant because it represents a tip of the scales since last year’s report, which found human traffic had overtaken bot traffic for the first time since at least 2012, when Imperva began tracking bot activity online. Now, the latest survey, which is based on an analysis of nearly 17 billion website visits from across 100,000 domains, shows bots are back on top. Not only that, but harmful bots have the edge over helper bots, which were responsible for 29 percent and 23 percent of all web traffic, respectively.

LaFrance goes on to cite the marketing director of Imperva (who wants to sell you ‘security’ – he’s in the business of selling data centre services) to observe that:

“The most alarming statistic in this report is also the most persistent trend it observes,” writes Igal Zeifman, Imperva’s marketing director, in a blog post about the research. “For the past five years, every third website visitor was an attack bot.”

How do we judge this report? I find it difficult to know how representative this company’s presentation of their data is, although they are the purveyor of a ‘global content delivery network’. The numbers seem believable, given how long we’ve been hearing that the majority of traffic is ‘not human’ (e.g. a 2013 article in The Atlantic making a similar point, and a 2012 ZDNet article saying the same thing: most web traffic is ‘not human’ and mostly malicious).
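
As a rough check on the scale being claimed, it can help to turn the percentages back into counts. A minimal back-of-envelope sketch in Python, using only the figures quoted above (the sample size and percentages are Imperva’s; the arithmetic is mine):

```python
# Back-of-envelope scale check using only the figures quoted above.
total_visits = 16.7e9  # sampled visits, August to November 2016
shares = {"all bots": 0.518, "bad bots": 0.29, "good bots": 0.23}

for label, share in shares.items():
    print(f"{label}: ~{total_visits * share / 1e9:.1f} billion visits")
# all bots comes to roughly 8.7 billion visits; note that the
# bad (29%) and good (23%) shares sum to the 52% headline figure.
```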

The ‘not human’ thing needs to be questioned a bit — yes, it’s not literally the result of a physical action but, then, how much of the activity on the electric grid can be said to be ‘not human’ too? I’d hazard that the majority of that so-called ‘not human’ traffic is under some kind of regular oversight and monitoring – it is, more or less, the expression of deliberative (human) agency. Indeed, to reduce the ‘human’ to what our simian digits can make happen seems ridiculous to me… We need a more expansive understanding of technical (as in technics) agency. We need more nuanced ways to come to terms with the scale and complexity of the ways we, as a species, produce and perform our experiences of everyday life – of what counts as work and the things we take for granted.

Microsoft Cognitive Services

Microsoft Cognitive Services (sounds like something from a Philip K. Dick novel) have opened up APIs, which you can call on (subscription required), to outsource forms of machine learning. So, if you want to identify faces in pictures or videos you can call on the “Face API”, for example. Obviously, this is all old news… but it’s sort of interesting to think about how this foregrounds a homogenisation of process – the apparent ‘power’ of these particular programmes (accessed via their APIs) may lie in their widespread use.
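
For a flavour of how ‘easy-to-use’ this structure is, here is a minimal sketch of what a call to the face-detection endpoint looks like over REST, in Python. The region subdomain, path and parameter names are my assumptions based on the v1.0 documentation, and the subscription key and image URL are placeholders – a sketch, not a definitive implementation:

```python
# A minimal sketch of calling the Face API over REST (Python 3, with
# the third-party `requests` library). Endpoint layout and parameter
# names are assumptions from the v1.0 docs; key and URL are placeholders.
import requests

SUBSCRIPTION_KEY = "your-subscription-key"  # placeholder
DETECT_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def detect_faces(image_url):
    """POST a publicly reachable image URL; returns a list of dicts,
    one per face the service recognises."""
    response = requests.post(
        DETECT_URL,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"returnFaceAttributes": "age,gender"},
        json={"url": image_url},
    )
    response.raise_for_status()  # surface auth/quota errors loudly
    return response.json()

faces = detect_faces("https://example.com/photo.jpg")
print(f"{len(faces)} face(s) detected")
```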

This might be of further interest when we consider things like the “Emotion API”, through which (in line with many other programmatic measures of the display or representation of ‘emotion’ or ‘sentiment’) the programme scores a facial expression along several measures, listed in the free example as: “anger”, “contempt”, “disgust”, “fear”, “happiness”, “neutral”, “sadness”, “surprise”. For each image you’ll get a table of scores for each recognised face. Have a play – it’s beguiling, but it then perhaps prompts the sorts of questions lots of people have been asking about how ‘affect’ and emotions get codified (e.g. Massumi) and about the politics and ethics of the ‘algorithms’ and such like that do these things (e.g. Beer).
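
To illustrate the ‘table of scores’ you get back per face, here is a small self-contained sketch that ranks the eight measures. The response shape loosely mimics what the free example appears to return, and the numbers are invented for illustration:

```python
# A sketch of reading the per-face 'table of scores' described above.
# The response shape and the score values are invented for illustration.
sample_face = {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 64},
    "scores": {
        "anger": 0.001, "contempt": 0.002, "disgust": 0.001,
        "fear": 0.0003, "happiness": 0.915, "neutral": 0.070,
        "sadness": 0.005, "surprise": 0.006,
    },
}

# Rank the eight measures from most to least confident.
ranked = sorted(sample_face["scores"].items(),
                key=lambda kv: kv[1], reverse=True)
for emotion, score in ranked:
    print(f"{emotion:>9}: {score:.3f}")
```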

I am probably late to all of this and seeing significance here because it’s relatively novel to me (not the tech itself but the ‘easy-to-use’ API structure); nevertheless it seems interesting, to me at least, that these forms of machine learning are being rendered mundane by being made abundant, as apparently straightforward tools. Maybe what I’m picking up on is that these APIs, and the programmes they grant access to, are relatively transparent, whereas much of what various ‘algorithm studies’ folk look at is opaque. Microsoft’s Cognitive Services make mundane what, to some, are very political technologies.


Spacing social media – seminar at Swansea (18th Jan)

I am participating in the geography seminar series at Swansea next week. I’ll be talking about some of the ideas that came out of the work we did with social media for the Contagion project.

Mostly the talk is about how ideas about space and spatial experience are important to understanding social media. This, very broadly, appears in two ways: (1) like any technology, how we use social media performs, reflects and reveals forms of spatial understanding and experience; and (2) both the methods and the subsequent analysis we do of social media, as geographers (but also in other disciplinary contexts), carry assumptions about space that perhaps need to be made more explicit (especially when methodological techniques carry assumptions about space that contradict the ideas we then employ in our analysis). This comes from a far-too-long reflection on a manuscript written for publication; in working through its problems I realised that there were some interesting geographical questions to make more explicit.

Anyway, the seminar is at 2pm on the 18th of January in Glyndwr E (see 11.1 on this campus map). Hope to see a few people there…

Here’s the abstract:

Spacing Social Media

This talk will interrogate the promise as well as the critical implications of the emerging geographies of social media. In particular, the spacing of social media will be addressed in terms of the ways we might understand and theorise space and spatiality. There will be three parts to the discussion: First, the promise of social media research is addressed through an initial exploration of how those media are ineluctably entangled in changes within social, economic and political fields. Second, the translations of data in social media research are addressed through the applications and techniques involved. Third, this provides a basis for subsequent discussion of the theoretical implications of digital data methods and their spacings. I will argue that the techniques and discourses of social media methods both imply and unsettle forms of spatial understanding, presenting challenges for geographical research.

When design fiction becomes the advert(?) Amazon Go and the refiguring of trust

I think I’ve been late to this. I saw the story about Barclaycard wanting to do “cardless” credit cards but, of course, Amazon want to vertically integrate. See the first video below. Interesting that this is incredibly similar to previous ‘envisionings’ of “the future” of retail/shopping. The first thing I thought was: ‘hang on, this is Microsoft circa 2004’, see the second video below… and I’m sure there have been others, not least from the likes of HP Labs… I wonder where the patents lie on this stuff, cos that will be a big bargaining chip.

This is interesting though insofar as, when I was writing about the Microsoft Office Labs videos in 2008/9, the ‘future’ they figured was always positioned at some distance; it was certainly not explicitly stated that this was something you should definitely expect to happen, more a kind of ‘mood music’ to capture some sensibilities of a possible future, representing it and hooking ideas into our general imagination of technology and society. It certainly plays on the trope of the normalisation of heavy surveillance… what else can such a system be?

The Amazon Go video is an interesting confluence of lots of contemporary trends in attempts to refigure how we imagine digital technology. Implicit in the video is a normalisation of yet-more automation (of payment, of trust). Explicit, as already mentioned, is that these kinds of places are not ‘private’ in any way – the system “knows” you, learns your habits and manages your money, and that’s OK; in fact, it’s apparently preferable (trust, again).

Amazon seem to be fairly aggressively pushing this, taking the smooth, apparently effortless aesthetics of many tech design fiction videos and using them as a means to capture the idea that such technology = Amazon. Apparently there is a “beta” shop in Seattle (where else?). No doubt someone will already be writing a journal article about this as code/space and, of course, it is (and, just as Kitchin & Dodge suggest about airports, I wouldn’t want to be in this shop when the servers go down), but I think the thing I find more interesting is that this is perhaps an overtly political manoeuvre to capture the public story about what ‘currency’ is and how payment works when we take higher levels of automation for granted, and about what kinds of institution we can trust. This is quite a different story to the blockchain: Amazon seem to be saying “let us handle the trust issue” – a pitch usually made by a bank, or PayPal… That might be interesting to think about (I’m sure people, like Rachel O’Dwyer, already are), not least in relation to the other ways ‘trust’ is being addressed (and attempts are being made to refigure it) by other companies, institutions and groups.

All this means I’ll definitely be re-writing my lecture about money for the next iteration of my “Geographies of Technology” module next term…

The Ethics of Information Literacy

Via Michael Sacasas

Yesterday, I caught Derek Thompson of The Atlantic discussing the problem of “fake news” on NPR’s Here and Now. It was all very sensible, of course. Thompson impressed upon the audience the importance of media literacy. He urged listeners to examine the provenance of the information they encounter. He also cited an article that appeared in […]

Read the full article.

Do ‘robots’ replace (‘human’) jobs?


‘Robots’ (or automated systems for manufacturing and distribution) seem to be on-trend amongst social scientists[1] (following on the heels of economists [especially those trying to peddle reports predicting futures] and tech evangelists), so here’s a deceptively simple question:

Do ‘robots’ (i.e. automation) replace/destroy jobs?

Lots of coverage across various media will give you an answer that is pretty much a categorical ‘yes’. In specific industries this is true, up to a point. It is true that particular jobs in manufacturing firms that bring in ‘robots’ are made redundant.

As this AP story on Yahoo (based on bits from Rand and some academic economists) suggests: “General Motors, for instance, now employs barely a third of the 600,000 workers it had in the 1970s. Yet it churns out more cars and trucks than ever.” However, another bit of economic research, from Deloitte, attests that between 1992 and 2014 total employment rose by 23%. Indeed, the academic economists Georg Graetz and Guy Michaels argue in a paper that “robots had no significant effect on total hours worked, [however] there is some evidence that they reduced the hours of both low-skilled and middle-skilled workers”.

So, what of this apparent paradox? Well, apparently, the jobs being replaced by automated processes are in specific sectors, and at the same time as those processes have removed jobs we have had larger growth in other sectors, such as care (at least according to Deloitte). Bodily energy, in the wielding of a hammer and so on, has been replaced by an automated machine process, but various ‘caring’, ‘cognitive’ and ‘creative’ forms of work have massively grown in number [2].

I’m not an economist, and neither can I necessarily verify the numbers and so on, but I am interested in the ways we use this information and these ideas of the creation and destruction of jobs/work to tell stories about our social/economic future [3]. We are asked to buy into various forms of technological determinism. A particularly pungent example of this is the ways in which ‘algorithms‘, and the kinds of agency of computer systems that word connotes, are said to have particular (mostly problematic or sinister) effects. Following Sally Wyatt’s excellent work on technology development we can point out some of the ways this determinism functions:

  1. We can be selectively descriptive in our technological determinism – explaining and defining specific processes as having an origin and impulse in particular technologies. The story about GM above is an example of what Wyatt terms descriptive technological determinism.
  2. We can simply assume the technology leads in a particular direction as a kind of common sense. Many of us cannot imagine life without some of our technological supports: e.g. electric power, artificial light and so on. We already have things like this, so they will surely lead to other (faster, ‘smarter’, more sophisticated etc etc) things. Some robots have existed in manufacturing for some time, so that surely means we will have more, and perhaps in other parts of our life (automated home here we come!). This is a form of what Wyatt terms normative technological determinism.
  3. The most readily understood version of determinism in relation to technology is when we say “x” will happen because of technology developments (in particular ways). This is rather common in relation to ‘robots’ – for instance, Deloitte, in another report on “the state of the State”, suggest around 865k jobs will be lost to automation by 2030. This is what Wyatt calls justificatory technological determinism.
  4. We have methods for making claims about the world and these make particular kinds of normative and epistemological assumptions about technology development, innovation and use. In this way there are forms of what Wyatt calls methodological determinism (and maybe that’s part of what I’m doing here too).

Of course, this isn’t only about the rationale of future orientation – it is also about the kinds of imagination that rationale both draws upon and produces. If we tell stories about the apparent destruction of working time then we can also tell stories about how that may lead to an increase in leisure time – as J. M. Keynes famously argued in his Economic Possibilities for Our Grandchildren.

One of the more interesting arguments in the imagining of an automated future concerns what a given society should do with the apparent wealth of time freed up by the robots. The ‘universal basic income‘ is one such story – that everyone should be granted a share of the wealth of productivity gains through a universal stipend that covers the basic cost of living. Here we enter the realm of (depending on your standpoint) attempting to think beyond capitalism – in the vein of ‘accelerationism‘ and so on (and, of course, there are critiques of this).

As in many discussions about the development of technology, the gap in thinking about automation, I think, lies in the ease with which we slip into affirming the stories we are told about technological development without questioning the assumptions and rationale of those stories, and without attempting to tell our own. For example, while I am a great admirer of Bernard Stiegler’s work I think he falls into this trap when building parts of his narrative about an economy of contribution. It seems to me that there is perhaps some thinking to be done on what I’ve begun to call (for my own thinking purposes) the ‘automative imaginary’, the work it does and the ways it might be put into question.

We, geographers and social scientists, can and should collaborate in the study and development of automation, algorithms, robots, and so forth. However, we should retain our critical stance on all generalised attempts to declare that these technologies and systems have done this or that to young people, leisure time, jobs, and so on. This doesn’t preclude seeing how automating systems can be used to increase what gets measured as productivity (for example, as in the GM case discussed above), or to determine behaviour (for example, as Natasha Schüll shows in reference to Las Vegas slot machines). However, such instances usually depend upon a very specific context that we need to unpick. We should speak up for a critical and playful perspective to counter the rise of the apparent certainty that a particular version of modelling will ‘predict’ technological futures. We can and should be provocative [4].

Notes

  1. See for example the articles and upcoming conference sessions on robots in human geography: Social Geography II: Robots; robotic futures; digital\\human\\labour.
  2. It is interesting to note that, for a while now, there has been a compelling counter-narrative to the ‘robots are taking/will take our jobs’ story, which is a rather more familiar one: globalisation and post-Fordism. This story argues that rather than ‘re-shoring’ manufacturing to automated hi-tech plants, manufacturers are still paying for cheap labour ‘off shore’ and instituting ever-leaner practices through ‘supplemental’ (to traditional human labour) robots. Likewise, Amazon, the apparent champion of robotising their processes, have a growing workforce, and have had for several years.
  3. It is depressingly predictable that these claims seem to rely upon focusing on traditionally normative male work, like manual labour, while systematically dismissing or forgetting work more often than not identified as in some way female (such as care work), which has actually grown significantly. Likewise, there is no accounting for the quality of the work being generated by automation, which may be badly paid, more precarious and less regular.
  4. For example: Counting Sheep, The Museum of Contemporary Commodities, Uninvited Guests.

A quantitative ideology? James Bridle on an algorithmic imaginary

The excellent artist James Bridle has written something for the New Humanist, published on their website, entitled “What’s wrong with big data?” Perhaps he’s been reading Rob Kitchin’s The Data Revolution? 🙂 Anyway, it sort of chimes with my previous post on data debates, and with the sense in which the problems Bridle so incisively lays out for his readers are not necessarily practical problems but epistemological ones – they pertain to the ways in which we are asked to make sense of the world…

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

Quantified thinking is the dominant ideology of contemporary life: not just in scientific and computational domains but in government policy, social relations and individual identity. It exists equally in qualified research and subconscious instinct, in the calculations of economic austerity and the determinacy of social media. It is the critical balance on which we have placed our ability to act in the world, while critically mistaking the basis for such actions. “More information” does not produce “more truth”, it endangers it.

You can read the whole article on the New Humanist website.

Video> Imagining automation – public talk

I gave a talk for the SW Futurists meetup group this week and they’ve recorded the talks. There were two speakers: Lucas Godfrey (Edinburgh) talked about the challenges of creating models of phenomena in the world so that you can automate things. I talked about the politics of the kinds of stories we tell about automation and how they orient our understandings of how automation might function. Both are included in the video but I’ve skipped to the start of my talk below.

Feel free to leave comments, ask questions etc. using the “Comments” function below… This presentation is sort of based on two bits of work about automation that have been developing as academic presentations. The first is about how we tell stories about work in relation to automation, and the way we use ‘algorithm’ as a proxy for that idea. The second is about how we imagine what apparently automated/automatic technologies are doing and what they can do. I think both of these things constitute what I’ve come to call an “automative imaginary”… I started out calling this “algorithmic—”, but I don’t think that’s what I have ever really meant. I also don’t think “robots”, another fashionable term, is a particularly helpful way to frame the ideas I’m interested in. Anyway, I’m hoping to develop this into a journal article.