(More) Gendered imaginings of automata

My Cayla Doll

A few more bits on how automation gets gendered in particular kinds of contexts and settings. In particular, the identification of ‘home’, or certain sorts of intimacy, with certain kinds of domestic or caring work, which then gets gendered, has been increasingly discussed.

Two PhD researchers I am lucky enough to be working with, Paula Crutchlow (Exeter) and Kate Byron (Bristol), have approached some of these issues from different directions. Paula has had to wrangle with this in a number of ways in relation to the Museum of Contemporary Commodities, but it was most visible in the shape of Mikayla, the hacked ‘My Friend Cayla’ doll. Kate is doing some deep dives on the sorts of assumptions that are embedded in the doing of AI/machine learning through the practices of designing, programming and so on. They are not, of course, alone. Excellent work by folks like Kate Crawford, Kate Devlin and Gina Neff (below) informs all of our conversations and work.

Here’s a collection of things that may provoke thought… I welcome any further suggestions or comments 🙂

Alexa, does AI have gender?


Alexa is female. Why? As children and adults enthusiastically shout instructions, questions and demands at Alexa, what messages are being reinforced? Professor Neff wonders if this is how we would secretly like to treat women: ‘We are inadvertently reproducing stereotypical behaviour that we wouldn’t want to see,’ she says.

Prof Gina Neff in conversation with Ruth Abrahams, OII.

Predatory Data: Gender Bias in Artificial Intelligence

It has been reported that female-sounding assistive chatbots regularly receive sexually charged messages. It was recently cited that five percent of all interactions with Robin Labs, whose bot platform helps commercial drivers with routes and logistics, are sexually explicit. The fact that the earliest female chatbots were designed to respond to these suggestions deferentially or with sass was problematic as it normalised sexual harassment.

Vidisha Mishra and Madhulika Srikumar – Predatory Data: Gender Bias in Artificial Intelligence

The Gender of Artificial Intelligence

Chart showing that the gender of artificial intelligence (AI) is not neutral
The gendering, or not, of chatbots, digital assistants and AI movie characters – Tyler Schnoebelen

“Consistently representing digital assistants as female hard-codes a connection between a woman’s voice and subservience.”

Stop Giving Digital Assistants Female Voices – Jessica Nordell, The New Republic

New journal article> A very public cull: the anatomy of an online issue public

Twitter

I am pleased to share that a paper that Rebecca Sandover, Steve Hinchliffe and I have had under review for some time has been accepted for publication. The paper comes from our project “Contagion”, which amongst other things examined the ways issue publics form and spread around public controversies – in this case the English badger cull of 2013/14. The research this article presents comes from mixed methods social media research, focused on Twitter. The methods and conversation have, of course, moved on a little in the last two years but I think the paper makes a contribution to how geographers in particular might think about doing social media-based research. I guess this, as a result, also fits into the recent (re)growth of ‘digital geographies’ too.

The article is titled “A very public cull: the anatomy of an online issue public” and will be published in Geoforum in the not-too-distant future. Feel free to get in touch for a pre-print version.

Abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple nor to be expected, and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.
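Not from the paper itself, but to give a flavour of what the mixed-methods social media analysis described above can involve in practice, here is a minimal, hypothetical sketch: treating retweets as edges in an interaction graph and extracting connected components as crude candidate ‘groupings’. All the account names and interactions below are invented for illustration.

```python
from collections import defaultdict

# Toy retweet edges: (retweeter, retweeted account). Hypothetical data
# standing in for a harvested Twitter dataset of badger-cull tweets.
edges = [
    ("ann", "badgertrust"), ("bob", "badgertrust"), ("ann", "bob"),
    ("carl", "defra"), ("dina", "defra"), ("carl", "dina"),
]

# Build an undirected adjacency list from the interaction edges.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def components(graph):
    """Connected components via iterative depth-first search."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(graph[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

groupings = components(graph)
print(len(groupings))  # prints 2: two disconnected clusters in this toy data
```

In real research the interesting work starts where this sketch ends: groupings overlap, cross-cutting ties matter, and the qualitative reading of what circulates within each cluster is what distinguishes an ‘echo chamber’ from a more dialogic public.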

Why WIRED’s future never arrives – David Karpf

Promotional image for the 1995 film Hackers

Quite a good piece on the Wired website reflecting upon 25 years of predictions about the future in the pages of that magazine (though I’m not sure the exonerating final paragraph rings true). Worth a read…

Looking back at WIRED’s early visions of the digital future, the mistake that seems most glaring is the magazine’s confidence that technology and the economics of abundance would erase social and economic inequality. Both Web 1.0 and Web 2.0 imagined a future that upended traditional economics. We were all going to be millionaires, all going to be creators, all going to be collaborators. But the bright future of abundance has, time and again, been waylaid by the present realities of earnings reports, venture investments, and shareholder capitalism. On its way to the many, the new wealth has consistently been diverted up to the few.

By now, the digital revolution isn’t just the future; it has a history. Digital technology runs our economy. It organizes our daily lives. It mediates how we learn information, tell each other stories, and connect with our neighbors. It’s how we control and harass and encourage one another. It’s a tool of both surveillance and resistance. You can almost never be entirely offline anymore. The internet is setting the agenda for the world around us.

The digital revolution’s track record suggests that its arc doesn’t always bend toward abundance—or in a straight line at all. It flits about, responding to the gravitational forces of hype bubbles and monopoly power, warped by the resilience of old institutions and the fragility of new ones. Today’s WIRED seems to have learned these lessons.

25 years of wired predictions: why the future never arrives – David Karpf

Some more A.I. links

Twiki the robot from Buck Rogers

This post contains some tabs I have had open in my browser for a while that I’m pasting here both to save them in a place I may remember to look and to share them with others that might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – just simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I.?”, Christopher Noessel: An interesting attempt to compare popular sci-fi stories about A.I. with contemporary A.I. research manifestos, identifying where we are not telling ourselves stories about the things people are actually trying to do.

 

The ethics of crashes with self-driving cars: A roadmap – Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently talked about this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning”, which sets out guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex‘.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

Job> Research Fellow, Centre for Postdigital Cultures

A statue of three men hammering

Via Gary Hall. Looks like a good opportunity for someone…

Research Fellow, Centre for Postdigital Cultures

The Centre for Postdigital Cultures (CPC) is a new Faculty Research Centre at Coventry University. We are looking to recruit a developer with knowledge of new technologies such as Xtended Reality and those associated with open access publishing. Our Research Fellow/developer will be as happy building platforms as contributing to funding bids and academic papers. The person we employ will be involved in all technical aspects of the Research Centre, from uploading website content to creating virtual reality scenarios, and will have the people skills to work with those at all levels of technical understanding.

You will be qualified in object-oriented programming languages such as C#, C++, Java, JavaScript and Swift, with proven experience of development in modern web development languages such as PHP and HTML5. You will also possess some experience in 2D/3D modelling, using software such as 3DS Max, Photoshop and Maya for digital asset creation, and the Unity 3D or Unreal game engines for developing game-based and immersive experiences.

This is the chance for you to join the CPC team in the early stages of growth and to play a significant role in the development of impactful research activity within the emerging field of postdigital cultures. Led by Professor Gary Hall, the Centre explores how innovations in postdigital cultures can enable 21st-century society to respond to the challenges it faces at a global, national and local level, including:

  • how we receive, consume and process information
  • how we learn, work, and travel
  • how we engage and regenerate our communities and cities

What Do We Mean By Postdigital Cultures?

“The digital” can no longer be understood as a separate domain of media and culture. If we actually examine the digital – rather than taking it for granted that we already know what it means – we see that today digital information processing is present in every aspect of our lives. This includes our global communication, entertainment, education, energy, banking, health, transport, manufacturing, food, and water-supply systems. Attention therefore needs to turn from “the digital” to the various overlapping processes and infrastructures that shape and organise the digital, and that the digital helps to shape and organise in turn.

The CPC investigates such enmeshed digital models of culture, society, and the creative economy for the 21st century “post digital” world.

Research Areas covered by the centre include:

  • Post-capitalist Economies
  • Post-humanities
  • Affirmative Disruption and Open Media
  • Immersive and Playful Cultures, Creative Archiving and International Heritage
  • Digital Arts and Humanities
  • The 21st Century University and Art School

Members of the CPC include Janneke Adema, Adrienne Evans, Valeria Graziano, Kaja Marczewska, Marcel Mars and Miriam de Rosa.

The role of the Research Fellow Developer is to plan, develop and manage collaborative and individual research projects, using their specialist technical skills.

This will include implementing new technology platforms and applications integral to the research projects of the CPC (for example, those associated with open access publishing and immersive XR technology), which address issues at an international scale using research as a driving force for global change. The role also involves developing ideas for generating income, seeking funding opportunities, and finding routes to disseminate research findings that inform teaching and build the reputation of the University whilst advancing knowledge in the field.

The Job Description and Person Specification are available here.

WhatsApp Research Awards for Social Science and Misinformation

A person removing a mask

Via Moira Weigel. Deadline is 12/08/2018.

WhatsApp Research Awards for Social Science and Misinformation

WhatsApp cares about the safety of our users and is seeking to inform our understanding of the safety problems people encounter on WhatsApp and what more we can do within WhatsApp and in partnership with civil society to address the problem. For this first phase of our program, WhatsApp is commissioning a competitive set of awards to researchers interested in exploring issues that are related to misinformation on WhatsApp. We welcome proposals from any social science or related discipline that foster insights into the impact of technology on contemporary society in this problem space. The WhatsApp Research Awards will provide funding for independent research proposals that are designed to be shared with WhatsApp, Facebook, and wider scholarly and policy communities. These are unrestricted monetary awards that offer investigators the freedom to deepen and extend their existing research portfolio. Applications are welcome from individuals with established experience studying online interaction and information technologies, as well as from persons seeking to expand their existing research into these areas.

Core Areas of Exploration

We will seriously consider proposals from any social science and technological perspective that propose projects that enrich our understanding of the problem of misinformation on WhatsApp. High priority areas include (but are not limited to):

  • Information processing of problematic content: We welcome proposals that explore the social, cognitive, and information processing variables involved in the consumption of content received on WhatsApp, its relation to the content’s credibility, and the decision to promote that content with others. This includes social cues and relationships, personal value systems, features of the content, content source etc. We are interested in understanding what aspects of the experience might help individuals engage more critically with potentially problematic content.
  • Election related information: We welcome proposals that examine how political actors are leveraging WhatsApp to organize and potentially influence elections in their constituencies. WhatsApp is a powerful medium for political actors to connect and communicate with their constituents. However, it can also be misused to share inaccurate or inflammatory political content. We are interested in understanding this space both from the perspective of political actors and the voter base. This includes understanding the unique characteristics of WhatsApp for political activity and its place in the ecosystem of social media and messaging platforms, distribution channels for political content, targeting strategies, etc.
  • Network effects and virality: We welcome proposals that explore the characteristics of networks and content. WhatsApp is designed to be a private, personal communication space and is not designed to facilitate trends or virality through algorithms or feedback. However, these behaviors do organically occur along social dimensions. We are interested in projects that inform our understanding of the spread of information through WhatsApp networks.
  • Digital literacy and misinformation: We welcome proposals that explore the relation between digital literacy and vulnerability to misinformation on WhatsApp. WhatsApp is very popular in some emerging markets, especially so among new-to-Internet users and populations with lower exposure to technology. We are interested in research that informs our efforts to bring technology safely and effectively into underserved geographical regions. This includes studies of individuals, families and communities, but also wider inquiries into factors that shape the context for the user experience online.
  • Detection of problematic behavior within encrypted systems: We welcome proposals that examine technical solutions to detecting problematic behavior within the restrictions of and in keeping with the principles of encryption. WhatsApp’s end-to-end encrypted system facilitates privacy and security for all WhatsApp users, including people who might be using the platform for illegal activities. How might we detect illegal activity without monitoring the content of all our users? We are particularly interested in understanding and deterring activities that facilitate the distribution of verifiably false information.

Program Format

Our preference is for proposals based on independent research, in which the applicant develops conceptual tools, gathers and analyzes data, and/or investigates relevant issues. Each awardee will retain all intellectual property rights to their data and analyses. WhatsApp staff may provide guidance, but investigators are responsible for carrying out the scope of work.

The program will make unrestricted awards of up to $50,000 per research proposal. All applications will be reviewed by WhatsApp research staff, with consultation from external experts. Payment will be made to the proposer’s host university or organization as an unrestricted gift.

In addition to the award monies, WhatsApp invites award recipients to attend two workshops:

  1. The first workshop will provide awardees with a detailed introduction to how the WhatsApp product works as well as context on the focus area of misinformation. It will also enable participants to receive feedback from WhatsApp research staff and invited guests on their research proposals. We hope this will facilitate international collaborations across researchers and teams in this area. The tentative date for this event is October 29-30, in Menlo Park, CA.
  2. A second workshop will allow awardees to present their initial research findings to WhatsApp and other awardees, providing an opportunity to contextualize their findings with each other. Our hope is that upon completion of the research, award recipients will seek to share their research with the wider public. Tentative date is April 2019, exact date will be updated on this page at a later time.

WhatsApp will arrange and pay for the travel and accommodation of one representative from each awardee. This will be in addition to the research award amount.

Data

  • No WhatsApp data will be provided to award recipients;
  • All data from award research efforts will be owned by the researcher, and need not be shared with WhatsApp.

Applications, Eligibility & Participant Expectations

  • Applications must be written in English and include the following:
    • A research title, identification of the Principal Investigator (PI) and their institutional affiliation for the purposes of the proposed research;
    • A brief program statement (double-spaced, 12 point font, not to exceed 5 pages) that specifies the proposed work. This statement should include the following elements:
      • specification of question(s) being asked;
      • clear statement of the methodology together with examples of when/where this approach has given research insights;
      • plan for any data collection, analysis, and/or conceptual work;
      • description of the expected research outputs and findings;
      • relevance for our understanding of user experiences in online environments.
    • A 1-page bio and CV for the PI together with selected publication references. Summary bios of any other team members or collaborators.
    • A clear statement of the budget requested.
  • Preference will be given to research conducted in countries where WhatsApp is a prominent medium of communication (India, Brazil, Indonesia, Mexico, etc.).
  • Preference will be given to proposals from researchers, or collaborations with researchers, based in the country/countries being researched.
  • WhatsApp will accept applications from researchers who hold a PhD. In exceptional cases, we will review applications from individuals without PhDs who have shown a high level of achievement in social science or technological research.
  • The award is restricted to social science and technological research that contributes to generalized scientific knowledge and its application. Documentaries, journalism, and oral history projects are not eligible.
  • Awards will be made to an awardee’s university department, research institute or organization; all applicants must therefore be affiliated with an organization that supports research and can process external funding awards. All awards will be made in US dollars.
  • Proposals may be submitted by individuals with no prior experience in social media or Internet research. We welcome proposals from researchers who seek to expand their research portfolio into the area of information and communication technologies.
  • All award recipients are strongly encouraged to attend the two WhatsApp workshops associated with this program. Travel and accommodation will be arranged and paid for by WhatsApp.
  • The proposed research should be carried out by the date of the second workshop, in April 2019. Presentation materials that comprise the final report should be written in English and made available for WhatsApp and the other award recipients by the date of the final workshop. All rights to these materials will be held by the award recipient.
  • Once awardees have accepted their awards, WhatsApp will publicly share the details of the selected applicants by posting a summary of the results together with the PI’s name and the title of the proposal on the Facebook Research blog. This information may also be included in other presentations or posts relating to this effort.

By applying to this award, you are agreeing to the following:

  • You are affiliated with an institution that supports research and can process external funding awards.
  • If chosen, your institution will receive the award as a gift in US dollars and in the amount decided solely by WhatsApp.
  • You acknowledge that you have been invited to two in-person WhatsApp workshops (tentatively in October 2018 and April 2019).
  • You acknowledge that WhatsApp will publicly disclose your name and the proposal title as an award recipient.
  • You plan to attend and present the research findings at the second WhatsApp workshop, likely to be held in Menlo Park, CA, USA in late April 2019. The workshops and presentations will be conducted in English. Interpretation will be provided if needed. Note: airfare, hotel and transportation to be arranged and paid for by WhatsApp.

Timing and Dates

Applications are due by August 12, 2018, 11:59pm PST. Award recipients will be notified of the status of their application by email by September 14, 2018.

Questions

For all questions regarding these awards, please contact us.