CFP: Workshop on Trustworthy Algorithmic Decision-Making

Not sure where I found this, but it may be of interest…

Workshop on Trustworthy Algorithmic Decision-Making
Call for Whitepapers

We seek participants for a National Science Foundation-sponsored workshop on December 4–5, 2017, to work together to better understand algorithms that are currently being used to make decisions for and about people, and how those algorithms and decisions can be made more trustworthy. We invite interested scholars to submit whitepapers of no more than 2 pages (excluding references); attendees will be invited based on whitepaper submissions. Meals and travel expenses will be provided.

Online algorithms, often based on data-driven machine-learning approaches, are increasingly being used to make decisions for and about people in society. One very prominent example is the Facebook News Feed algorithm that ranks posts and stories for each person, and effectively prioritizes what news and information that person sees. Police are using “predictive policing” algorithms to choose where to patrol, and courts are using algorithms that predict the likelihood of reoffending to inform sentencing. Face recognition algorithms are being implemented in airports in lieu of ID checks. Both Uber and Amazon use algorithms to set and adjust prices. Waymo/Google’s self-driving cars are using Google Maps not just as a suggestion, but to actually make route choices.

As these algorithms become more integrated into people’s lives, they have the potential to have increasingly large impacts. However, if these algorithms cannot be trusted to perform fairly and without undue influences, then there may be some very bad unintended effects. For example, some computer vision algorithms have mis-labeled African Americans as “gorillas”, and some recidivism-prediction algorithms have been shown to be racially biased. Many organizations employ “search engine optimization” techniques to alter the outcomes of search algorithms, and “social media optimization” to improve the ranking of their content on social media.

Researching and improving the trustworthiness of algorithmic decision-making will require a diverse set of skills and approaches. We look to involve participants from multiple sectors (academia, industry, government, popular scholarship) and from multiple intellectual and methodological approaches (computational, quantitative, qualitative, legal, social, critical, ethical, humanistic).

Whitepapers

To help get the conversation started and to get new ideas into the workshop, we solicit whitepapers of no more than two pages in length that describe an important aspect of trustworthy algorithmic decision-making. These whitepapers can motivate specific questions that need more research; they can describe an approach to part of the problem that is particularly interesting or likely to help make progress; or they can describe a case study of a specific instance in the world of algorithmic decision-making and the issues or challenges that case brings up.

Some questions that these whitepapers can address include (but are not limited to):

  • What does it mean for an algorithm to be trustworthy?
  • What outcomes, goals, or metrics should be applied to algorithms and algorithm-made decisions (beyond classic machine-learning accuracy metrics)?
  • What does it mean for an algorithm to be fair? Are there multiple perspectives on this?
  • What threat models are appropriate for studying algorithms? For algorithm-made decisions?
  • What are ways we can study data-driven algorithms when researchers don’t always have access to the algorithms or to the data, and when the data is constantly changing?
  • Should algorithms that make recommendations be held to different standards than algorithms that make decisions? Should filtering algorithms have different standards than ranking or prioritization algorithms?
  • When systems use algorithms to make decisions, are there ways to institute checks and balances on those decisions? Should we automate those?
  • Does transparency really achieve trustworthiness? What are alternative approaches to trusting algorithms and algorithm-made decisions?

Please submit white papers along with a CV or current webpage by October 9, 2017 via email to trustworthy-algorithms@bitlab.cas.msu.edu. We plan to post whitepapers publicly on the workshop website (with authors’ permission) to facilitate conversation ahead of, at, and after the workshop. More information about the workshop can be found at http://trustworthy-algorithms.org.

We have limited funding for PhD students interested in these topics to attend the workshop. Interested students should also submit a whitepaper with a brief description of their research interests and thoughts on these topics, and indicate in their email that they are PhD students.

CFP: Theorising digital space

glitched image of a 1990s NASA VR experience

In another of a series of moments that feel dangerously like a return to the 1990s, as some geographers attempt to wrangle ‘digital geographies’ into a brand (which I find problematic), I saw the below CFP for the AAG.

I am sorry if it seems like I’m picking on this one CFP: I have no doubt that it was written with the best of intentions, and if I were able to attend the conference I would apply to speak and attend it. I hope others will too. For the purposes of this post, it is simply the latest in a line of conference sessions that unfortunately seem to miss, or even elide, long-standing debates in geography about mediation.

Maybe my reaction is in part because I cannot attend (I’m only human, I’d quite like to go to New Orleans!), but it is also in part because I am honestly shocked at the inability of debates within what is, after all, a fairly small discipline to move forward in thinking about ‘space’ and mediation. This stands out because it follows from ‘digital’ sessions at the AAG last year that made similar sorts of omissions.

In the late 1990s a whole host of people theorised place/space in relation to what we’re now calling ‘the digital’. Quite a few were geographers. There exists a significant and, sometimes, sophisticated literature that lays out these debates, ranging from landmark journal articles to edited books and monographs that all offer different views on how to understand mediation spatially (some of this work features in a bibliography I made ages ago).

Ironically, perhaps, all of this is largely accessible ‘online’: you only need to search for relevant key terms and follow citation chains through repositories – much of it is there, and many of the authors are accessible ‘digitally’ too. And yet, periodically, we see what is in effect the same call for papers asking similar questions: is there a ‘physical’/’digital’ binary [no], what might it do, how do we research the ‘digital’, the ‘virtual’, etc.

We, all kinds of geographers, are not only now beginning to look at digital geographies: it has been going on for some time, and it would be great if that were acknowledged in the way that Prof. Dorothea Kleine did, with rare clarity, in her introduction to the RGS Digital Geographies Working Group symposium earlier this year (skip to 03:12 in this video).

So, I really hope that some of the authors of books like “Virtual Geographies“, to take just one example (there are loads more – I’m not seeking to be canonical!), might consider re-engaging with these discussions to lend some of the perspective that they have accrued over the last 20+ years, and speak at, or at least attend, sessions like this.

I hope that others will consider speaking in this session, to engage productively and to open out debate, rather than attempt to limit it in a kind of clique-y brand.

Theorizing Place and Space in Digital Geography: The Human Geography of the Digital Realm

In 1994 Doreen Massey released Space, Place and Gender, bringing together in a single volume her thoughts on many of the key discussions in geography in the 1980s and early 1990s. Of note was the chapter, A global sense of place, and the discussion on what constitutes a place. Massey argues that places, just like people, have multiple identities, and that multiple identities can be placed on the same space, creating multiple places inside space. Places can be created by different people and communities, and it is through social practice, particularly social interaction, that place is made. Throughout this book, Massey also argues that places are processional, they are not frozen moments, and that they are not clearly defined through borders. As more and more human exchanges in the ‘physical realm’ move to, or at least involve in some way, the ‘digital realm’, how should we understand the sites of the social that happen to be in the digital? What does a human geography, place orientated understanding of the digital sites of social interaction tell us about geography? Both that in the digital and physical world.
Massey also notes that ‘communities can exist without being in the same place – from networks of friends with like interests, to major religious, ethnic or political communities’. Ever-evolving mobile technologies, the widening infrastructures that support them and increasing access to smartphones – thanks in part to new smartphone makers in China releasing affordable yet powerful handsets around the world – have made access to the digital realm, both fixed in place (through computers) and, increasingly, through mobile technologies, a possibility for a growing number of people worldwide. How do impoverished or excluded groups use smart technologies to (re)produce place, or a sense of place, in ways that include links to the digital realm? From rural farming communities to refugees fleeing Syria and many more groups, in what ways does the digital realm afford spatial and place-making opportunities to those lacking place or spatial security?

How are we to understand the digital geographies of platforms and the spaces that they give us access to? Do platforms themselves even have geographies? Recently geographers such as Mark Graham have begun a mapping of the dark net, but how should we understand the geographies of other digital spaces, from instant messaging platforms to social media or video streaming websites? What is visible and what is obscured? And what can we learn about traditional topics in social science, such as power and inequality, when we begin to look at digital geographies?

In this session of five papers we are looking for contributions exploring:

  • Theories of place and space in the digital realm, including those that explore the relationship between the digital and physical realms
  • Research on the role of the digital realm in (re)producing physical places, spaces and communities, or in creating new places, spaces and communities, both in the digital realm and outside of it.
  • Papers considering the relationship between physical and digital realms and accounts of co-production within them.
  • The role of digital technologies in providing a sense of space and place, spatial security, and secure spaces and places to those who lack them.
  • Research exploring the geographies of digital platforms, websites, games or applications, particularly qualitative accounts that examine the physical and digital geographies of platforms, websites, games or applications.
  • Research examining issues of power, inequality, visibility and distance inside of the digital realm.

“Invisible Images: Ethics of Autonomous Vision Systems” Trevor Paglen at “AI Now” (video)

racist facial recognition

Via Data & Society / AI Now.

Trevor Paglen on ‘autonomous hypernormal mega-meta-realism’ (probably a nod to Curtis there). An entertaining brief talk about ‘AI’ visual recognition systems and their aesthetics.

(I don’t normally hold with laughing at your own gags but Paglen says some interesting things here – expanded upon in this piece (‘Invisible Images: Your pictures are looking at you’) and this artwork – Sight Machines [see below]).

‘Automated’ sweated labour

Charlie Chaplin in Modern Times

This piece by Sonia Sodha (Worry less about robots and more about sweatshops) in the Grauniad, which accompanies an episode of the Radio 4 programme Analysis (Who Speaks for the Workers?), is well worth checking out. It makes a case that seems to be gaining consensus: that ‘automation’ in particular parts of industry will not mean ‘robots’ but rather pushing workers to become more ‘robotic’. This is an interesting foil to the ‘automated luxury communism’ schtick and the wider imaginings of automation. If you stop to think about wider and longer-term trends in labour practices, it also feels depressingly possible…

This is the underbelly of our labour market: illegal exploitation, plain and simple. But there are other legal means employers can use to sweat their labour. In a sector such as logistics, smart technology is not being used to replace workers altogether, but to make them increasingly resemble robots. Parcel delivery and warehouse workers find themselves directed along exact routes in the name of efficiency. Wrist-based devices allow bosses to track their every move, right down to how long they take for lavatory breaks and the speed with which they move a particular piece of stock in a warehouse or from the delivery van to someone’s front door.

This hints at a chilling future: not one where robots have replaced us altogether, but where algorithms have completely eroded worker autonomy, undermining the dignity of work and the sense of pride that people can take in a job well done.

This fits well with complementary arguments about ‘heteromation‘ and other more nuanced understandings of what’s followed or extended what we used to call ‘post-Fordism’…

The ambiguity of sharing images

Two tweets, about 12 hours apart. It seems to me – in an entirely unsystematic, morning-coffee kind of analysis – that the two posts demonstrate something of the ambiguity of image-sharing practices and the circulation of images, at least in my experience of one platform, Twitter.

The “Grease” tweet, through humour, attempts to comment on contemporary geopolitics. The veracity (or not) of the image possibly doesn’t matter.

The ‘fact check’ nature of the later tweet directly addresses the (lack of) authenticity of the image itself. Showing the ‘original’.

So there’s something about the ‘fakeness’ of media, the politics of circulation, something about simulacra and the convening of publics, and maybe something about the ambivalence of image-making and sharing practices that falls within the “meme” discourse.

In discussing her work as part of the RGS-IBG ‘digital geographies’ working group symposium about 10 days ago, Gillian Rose discussed the ways in which we may or may not malign the ‘everydayness’ of photographic or image practices and why it remains necessary to study and engage with the everyday practices of meaning-making (there’s a course for this, co-convened by Gillian).

This perhaps prompts some questions about the above tweets. For example, what is it we can or might want to say about the images themselves, their circulation and how they fit into wider, everyday, meaning-making practices? The doctored image fits into a particular aesthetic of ‘memes’ and is contextualised in text in the post, which also goes for the ‘fact check’ tweet too, in a way. How do we interpret the (likely) different intentions behind the thousands of retweets of the above? How might we capture the ‘polymedia’ (following Miller et al.) lives of such images? (Is that even possible?) How might we interrogate what I’m suggesting is the ambivalence of ‘sharing’? I suggest this cannot be served by the mass analysis of image corpora (following Manovich), nor is it really reducible to the ‘attention economy’ – it’s not only about the labour of sharing or the advertising it enables. Instead, I guess what I’m fumbling towards is asking for the analysis of the circulation practices for (copies of) a single image within a network (which may or may not span different platforms).

The danger, I increasingly feel, is that we all-too-quickly resort to super-imposing onto these case studies our ontotheological or ideological meta-narratives – so, it may ‘really’ be about affect, neoliberalism and so on… except of course, it isn’t only about those things, and while they may be important analytical frames they may not address the questions we’re interested in, or should be, posing. I’m not saying such framings are wrong, I’m saying they’re not the only frames of analysis.

All of this leads me to confess that I am beginning to wonder if our ‘digital methods‘ (following Rogers and others) are really up to this sort of task… As yet I’ve not read anything to convince me otherwise, which actually sort of surprises me. The closest I’ve got is the media ethnography work of the outstanding Why We Post project – but, of course, that isn’t particularly a “digital” method, which maybe says something (maybe about my own bias). I’d be interested to know if anyone has any thoughts.

A further thing I wonder is whether or not these sorts of practices will remain stable enough for long enough to warrant the ‘slower’, considered, kinds of research that might enable us to begin to get at answers to my all-too-general, or misplaced, questions above. I remain haunted by undergraduate and masters research into now-defunct platforms and styles of media use… Friendster and MySpace, anyone?


Reblog> CFP: Affect, Politics, Social Media

Via Tony Sampson.

This may be of interest to followers of this blog…

Call for papers: Affect, Politics, Social Media

In prolongation of Affect and Social Media #3, Conjunctions: Transdisciplinary Journal of Cultural Participation welcomes proposals that interpret and explore affective and emotional encounters with social media, and the ways in which the interfaces of social media in return modulate affectivity. Fake news has become a highly debated framework for understanding the consequences of the entanglements of affect, politics and social media. But theories of fake news often fail to grasp the consequences and significance of social media content that is not necessarily fake, but is merely intended to affectively intensify certain political positions.

It is in this context that it becomes crucial to understand the role of affect in relation to the ways in which social media interfaces function, how affective relations are altered on social media and not least how politics is transformed in the attempt to capitalize on the affective relations and intensities potentially fostered on social media.

This special issue invites empirical, theoretical and practical contributions that focus on recent (political) media events – such as Brexit, the US and French elections and the refugee crisis – and how these unfolded on, and are informed by, social media. Proposals might, for instance, address how the Trump campaign allows us to develop a new understanding of the relationship between social media and politics. As such the issue seeks papers that develop new understandings of affective politics and take into account shared experiences, affective intensities, emotional engagements and new entanglements with social media.

For more information, including author guidelines, please visit http://www.conjunctions-tjcp.com/

Deadline 28 November 2017

Articles must be submitted to conjunctions@cc.au.dk

Responsive media

personal media

It’s interesting to compare competing interpretations of the same ‘vision’ for our near-future everyday media experience. They more or less circle around a series of themes that have been a staple of science fiction for some time: media are in the everyday environment and they respond to us, to varying degrees personally.

On the one-hand some tech enthusiasts/developers present ideas such as “responsive media“, a vision put forward by a former head of ubiquitous computing at Xerox PARC, Bo Begole. On the other hand, sceptics have, for quite some time, presented us with dystopian and/or ‘critical’ reflections on the kinds of ethical and political(economic) ills such ideas might mete out upon us (more often than not from a broadly Marxian perspective), recently expressed in Adam Greenfield’s op-ed for the Graun (publicising his new book “Radical Technologies”).

It’s not like there aren’t plenty of start-ups, and bigger companies (Begole now works for Huawei), trying to more-or-less make the things that science fiction books and films (often derived in some way from Philip K. Dick’s oeuvre) present as insidious and nightmarish. Here I can unfairly pick on two quick examples: the Channel 4 “world’s first personalised advert” (see the video above) and OfferMoments:

While it may be true that many new inventors are subconsciously inspired by the science fiction of their childhoods, this form of inspiration is hardly seen in the world of outdoor media. Not so for OfferMoments – a company offering facial recognition-powered, programmatically-sold billboard tech directly inspired by the 2002 thriller, Minority Report.

I’ve discussed this in probably too-prosaic terms as a ‘politics of anticipation’, but this, by Audrey Watters (originally about EdTech), seems pretty incisive to me:

if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

I have come to think this has produced a kind of orientation towards particular ideas and ideals around automation, which I’ve variously been discussing (in the brief moments in which I manage to do research) as an ‘algorithmic’ and more recently an ‘automative’ imagination (in the manner in which we, geographers, talk about a ‘geographical imagination’).

CFP> “VIRAL/GLOBAL Popular Culture and Social Media: An International Perspective” 13th Sept 17 CAMRI

Via Tony Sampson.

“VIRAL/GLOBAL Popular Culture and Social Media: An International Perspective” The University of Westminster Communication and Media Research Institute (CAMRI), Sept 13th 2017

Date:
13 September 2017
Time: 9:00am to 7:00pm
Location: Regent Campus, 309 Regent Street, London W1B 2HW


Conference organised by the Communication and Media Research Institute (CAMRI)

Keynote Panel

  • Nancy Baym 
  • Emily Keightley
  • Dave Morley (TBC)
  • Tony D Sampson
  • Paddy Scannell

This interdisciplinary conference aims to examine how and why everyday popular culture is produced and consumed on digital platforms. There is increasing interest in studying and discussing the linkages between popular culture and social media, yet important gaps remain when comparing such cultural phenomena and modes of consumption in a global, non-west-centric context. The conference addresses a significant gap in theoretical and empirical work on social media by focusing on the politics of digital cultures from below and in the context of everyday life. To use Raymond Williams’s phrase, we seek to rethink digital viral cultures as ‘a whole way of life’; how ‘ordinary’, everyday digital acts can amount to forms of ‘politicity’ that can redefine experience and what is possible.

The conference will examine how social media users engage with cultural products on digital platforms. We will also assess how the relationship between social media and popular cultural phenomena generates different meanings and experiences.

The conference engages with the following key questions:

  • How do online users in different global contexts engage with viral/popular cultures?
  • How can the comparative analysis of different global contexts help us contribute to theorising emergent viral cultures in the age of social media?
  • How do viral digital cultures redefine our experience of self and the world?

We welcome papers from scholars that will engage critically with particular aspects of online popular cultures. Themes may include, but are not limited to, the following:

  • Analysing viral media texts: method and theory
  • Theorising virality: new/old concepts
  • Rethinking popular culture in the age of social media
  • Social media, politicity and the viral
  • The political economy of viral cultures
  • Memes, appropriation, collage, virality and trash aesthetics
  • Making/doing/being/consuming viral texts
  • Hybrid strategies of anti-politics in digital media
  • Viral news/Fake news
  • Non-mainstream music, protest, and political discussion
  • Capitalism and viral marketing

PROGRAMME AND REGISTRATION

This one-day conference, taking place on Wednesday, 13th of September 2017, will consist of a keynote panel and panel sessions. The fee for registration for all participants, including presenters, will be £40, with a concessionary rate of £15 for students, to cover all conference documentation, refreshments and administration costs.

DEADLINE FOR ABSTRACTS

The deadline for abstracts is Monday 10 July 2017. Successful applicants will be notified by Monday 17 July 2017. Abstracts should be 250 words. They must include the presenter’s name, affiliation, email and postal address, together with the title of the paper and a 150-word biographical note on the presenter. Please send all these items together in a single Word file (not a PDF), and entitle the file and message with ‘CAMRI 2017’ followed by your surname. The file should be sent by email to Events Coordinator Karen Foster at har-events@westminster.ac.uk

Original: https://www.westminster.ac.uk/call-for-papers-viral-global-popular-cultures-and-social-media-an-international-perspective

How and why is children’s digital data being harvested?

Nice post by Huw Davies, which is worth a quick read (it’s fairly short)…

We need to ask what data capture and management would look like if guided by a children’s framework such as the one developed here by Sonia Livingstone and endorsed by the Children’s Commissioner here. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.

Responsibility gaps and autonomy – AI, autonomous weapons and cars

Over on the excellent Algocracy blog/podcast, John Danaher interviews Hin-Yan Liu, a legal scholar in Copenhagen who has done work on responsibility and autonomy in relation to autonomous weapons systems and driverless cars. The discussion is really interesting: it thinks through various ways of understanding responsibility in relation to autonomy, expands ideas about what an ‘autonomous weapons system’ might be (for example, is a private military contractor an AWS?), and works through the ethical, moral and political issues of the different ways responsibility gets understood. I encourage you to have a listen.

This stems from work by Liu that is published in two papers:

Here’s Liu’s faculty webpage.