New journal article> A very public cull: the anatomy of an online issue public


I am pleased to share that a paper that Rebecca Sandover, Steve Hinchliffe and I have had under review for some time has been accepted for publication. The paper comes from our project “Contagion”, which amongst other things examined the ways issue publics form and spread around public controversies – in this case the English badger cull of 2013/14. The article draws on mixed-methods social media research, focused on Twitter. The methods and the conversation have, of course, moved on a little in the last two years, but I think the paper makes a contribution to how geographers in particular might think about doing social media-based research. I guess, as a result, this also fits into the recent (re)growth of ‘digital geographies’.

The article is titled “A very public cull: the anatomy of an online issue public” and will be published in Geoforum in the not-too-distant future. Feel free to get in touch for a pre-print version.

Abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple nor to be expected, and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.

Pitching the ‘automative imagination’

Still from the video for All is Love by Bjork

I’ve got a draft book proposal. I think I know where it’s going. I’ve also had a go at securing funding (yes, I’m not holding my breath) to support writing the book and hopefully produce an associated podcast – more on that another time.

It’s perhaps foolhardy or overly optimistic but I want to share the gist of the pitch here. I’d really welcome feedback or suggestions and can share a fuller version of the proposal if you happen to be interested – please get in touch via email.

The Automative Imagination

Automation is both a contemporary and an enduring concern. The ‘automative imagination’ is a way of articulating different habits of considering and discussing automation. I am not using the neologism ‘automative’ to assert any kind of authority but rather as a pragmatic tool. Other words do not fit: to speak of an ‘automated’ or ‘automatic’ imagination does not describe the characteristics of automation but suggests the imagining is itself automated, which is not the argument I am seeking to make. This book explores how automation is imagined as much as it is planned and enacted.

The ways in which automation is bound up with how everyday life is understood is under-examined. Expectations are fostered, with examples drawing upon popular culture and mythology, without the bases for these expectations being sufficiently scrutinised. This book examines precisely the foundations of the visions of automation we are invited to believe. Through the original conceptual lens of the ‘automative imagination’ I interrogate and thematically categorise the forms of imagination that underpin contemporary discussion and envisioning of automation. The contribution of this book is the identification and analysis of the double-bind between the widespread envisioning of an automated future, always-to-come, and the power of such visions, and those who propose them, over the ongoing projects to automate various aspects of contemporary life.

The book is organised around a theoretical framework, emerging from initial research, consisting of five figures: ‘progress’, the ‘machine’, the ‘master’/‘slave’, the ‘idiot’ and the ‘monster’. Each of these figures forms the spine of one of the five substantive chapters of the monograph. ‘Progress’ is popularly figured as an economic and socio-cultural force of ‘ages’, ‘revolutions’ and ‘waves’, often tied to particular technologies, that plays out in cities, at work and at home. The apparatus of the ‘machine’ is often figured as the driver of change – the near-autonomous mechanisms of factories, governments and institutions are seen as both the engines and regulators of change. Members of society are figured, therefore, as either ‘master’ of or ‘enslaved’ by autonomous technology – both at work and at home. The apparent autonomy of these technologies is said to divorce citizens from knowledge of how to work and live, rendering them ‘idiots’, whilst at the same time the errors of these autonomous systems repeatedly feature in the news as somehow ‘idiotic’. Finally, and perhaps most enduringly, the abstract figure of technology as a ‘monstrous’ other to ‘the human’ occupies a significant place in the collective imagination.

The Automative Imagination demonstrates that automation, and how it functions, is imagined through five interlinked geographical contexts: the city (or region), the home, the factory (or workplace), the institution and ‘in transit’. These interlinked geographies are not chapters themselves but rather form the fundamental context of the five thematic chapters of the monograph and feature as central threads that weave together the conceptual narrative of the book. The contemporary ‘automative imagination’, seen through the theoretical lens of the five figures and their interlinked geographical contexts, is a paradox of fantasy and uniformity. The book concludes by arguing for more pluralised and situated imaginings of automation and by offering resources for developing them.

The central methodological framework for the completion of this project is a critical reading across genres of key contemporary and archival texts. The Automative Imagination develops novel theoretical perspectives for investigating the formation of the ‘automative imagination’. These novel perspectives, organised through the five figures outlined above, are developed in the intersections of deconstruction as a method of critical thinking, feminist technology studies’ examinations of the social shaping of technology and pragmatist interrogations of the ‘everyday’ and ‘ordinary’.

This synthesis attends to the normative and situated nature of the ‘automative imagination’, through analyses of particular texts. The texts that form the basis of the analysis span three categories: academic, archival and film. Through preparatory research, a range of discourses of automation have already been identified within economics and the social sciences that provide some of the rationale for contemporary visions of automation. These are critically read together with archival newspaper and trade journal articles, novels and fiction & non-fiction films. The ground work for this analysis is already completed – identifying key sources and gathering archival materials and an initial systematic literature review of consultancy, non-governmental organisation and think tank reports.

The Automative Imagination speaks to a range of contemporary academic and policy agendas, in the UK, the EU and globally, not least the UK government’s and World Economic Forum’s ‘Fourth Industrial Revolution’ agendas. The contribution of the book is novel in its formulation of theoretical resources for understanding how automation is imagined and what work those imaginings are doing in the world.

Machine Learning ‘like alchemy’, not electricity

Holly from the UK TV programme Red Dwarf

In this Neural Information Processing Systems conference talk (2017), given on receiving the ‘test of time’ award, Ali Rahimi discusses the status of rigour in the field of machine learning. In response to Andrew Ng’s infamous claim that “Artificial Intelligence is the new electricity”, Rahimi retorts that “machine learning is like alchemy”. I’ve embedded the talk below; it kicks in at the point where Rahimi starts this bit of the argument. I confess I don’t understand the maths talk it is embedded within, but I think this embodies the best of ‘science’/academia – cut the bullshit, talk about what we don’t know as much as what we do.

Reblog> How to Write a Peer Review for a Journal Article – Jack Gieseking

Via Jack Gieseking. Some excellent tips for new and experienced peer-reviewers alike, I think…

How to Write a Peer Review for a Journal Article

As an editorial collective member of ACME: An International Journal for Critical Geographies and as someone who once managed WSQ: Women’s Studies Quarterly for three years, I know how difficult it is to find appropriate and available peer reviewers. I often seek out graduate candidates (ABD students) who would offer that strong expertise but may not have reviewed journal articles, or many journal articles, before. I remember how awkward and nervous I was–and how many, many hours I devoted (oy)–when I wrote my first peer reviews.

Thanks to various search engines, I’ve read quite a few posts on how to write peer reviews. Many of them are written by publishers, peer review corporations (yeeghads!), or other academics. These are all helpful in that they structure the work of peer review, but I found the former to be too detailed and formal, and therefore more anxiety-producing than clarifying. If they were brief, like the academic perspectives, I found myself unclear about how they expected you to accomplish each step. I’ve cobbled together my own thoughts about how to do a peer review, drawn from my own experience, as a gesture of support and solidarity for graduate students, postdocs, and early career researchers engaged in critical and radical research who wish to be part of the project of producing knowledge through peer review. My own take as a social scientist offers an organized response structured around the parts of–surprise!!–a social science paper, which I have not seen mentioned elsewhere as of yet.

Before I go on, my first tip is that each peer review should take no more than two to six hours. If you spend the maximum number of hours (six) on your early peer reviews, that number should significantly decrease over time as you perfect your own approach to reviews. A six-hour review imagines you read the paper three times and then type up your notes. My second tip is that your entire review can be a page, preferably, or two pages long. You’re wondering if I’m serious, but how would you feel if you wrote a 20-page paper over months and someone gave you five pages of single-spaced feedback? Exactly. One or two pages is a lot to chew on. Finally, as you read, think about making summary comments and identifying trends (in style, a la too many commas or overciting; in writing, a la a vast absence of methods, etc.) rather than line edits.

Read the full blogpost.


Request for resources: Studying other researchers

Ethnographer in the film Kitchen Stories

I know plenty of other people study people who are themselves researchers, that’s more-or-less what STS folk do (in a sweeping generalisation), but I haven’t really read anything that discusses what this means for the research process…

I’m not sure if researching other researchers is necessarily ‘special’ or different but it does bring with it some peculiar considerations about how to negotiate disciplinarity in the context of others who inhabit close, perhaps cognate, bits of academia and research ‘cultures’.

So, what I’m after, if anyone actually reads this, is suggestions of fairly pragmatic (i.e. not ‘grand theory’) approaches to doing research about other researchers – the good, the bad and the ugly of that kind of fieldwork and research practice.

I ask because I keep periodically looking at ‘data’ I gathered from a second round of field work in Silicon Valley in 2011 which I’ve never meaningfully written up – I spent the time applying for jobs instead and the moment sort of passed… but I am interested in revisiting the ‘doing’ of the research perhaps because it was actually fairly uncomfortable for a number of reasons and I have my own unanswered questions about why, and what one might do differently…

P.S. please don’t reply on Twitter – I’m not checking it

Making space for failure – article by Harrowell, Davies & Disney

statue of a man holding his head with his right hand

An article about making space for failure in research in The Professional Geographer caught my eye in a Twitter post today. I’ve copied the title and abstract below, with a link to the article.

Making Space for Failure in Geographic Research

The idea that field research is an inherently “messy” process has become widely accepted by geographers in recent years. There has thus far been little acknowledgment, however, of the role that failure plays in doing human geography. In this article we push back against this, arguing that failure should be recognized as a central component of what it means to do qualitative geographical field research. This article seeks to use failure proactively and provocatively as a powerful resource to improve research practice and outcomes, reconsidering and giving voice to it as everyday, productive, and necessary to our continual development as researchers and academics. This article argues that there is much value to be found in failure if it is critically examined and shared, and–crucially–if there is a supportive space in which to exchange our experiences of failing in the field.

I really value the honesty that Elly Harrowell, Thom Davies and Tom Disney bring to their accounts of how they dealt with perceived failures in their own research. It makes me think about all sorts of things from my own research.

First, it makes me think of my own shambolic and fairly short PhD fieldwork experience. In 2008, when doing what was my first ‘proper’ fieldwork, I was visiting lots of different labs and research centres, driving all over Silicon Valley. I attended ‘meet-ups’ and open lectures at various tech campuses as a way of networking and recruiting participants. Two memorable things stand out: first, locking myself in a toilet in a large multinational tech company HQ and being unable to get out without someone calling security for me; and second, leaving my ethics forms in a different bag and then getting roundly told off by a research participant – which led onto an interesting discussion about their own research ethics.

If you are able to laugh at yourself I think it helps. This is something I’m honestly not that good at and remembering these things can be painful. However, as Harrowell, Davies and Disney all outline in their article – failure can be productive, it can lead to different insights and reveal things for and about your research you might otherwise not appreciate.

Second, the article reminded me of a theme in my PhD research, which I didn’t really pursue – lots of the people involved in tech R&D I interviewed talked about the hidden nature of failure, that it’s an important part of their work but that, because it doesn’t lead to reward, via papers and patents, it doesn’t really get discussed or made present. The sad thing about this is that several people talked about seeing projects at other institutions that repeated their own ‘mistakes’ or ‘failures’ several years later.

This is a common theme and there’s been some commentary about this in terms of ‘scientific progress’ – in the sense that if you don’t know that other people have tried something and it didn’t work then you may well unknowingly repeat an experiment that will fail. About eight years ago this was a theme also brought up in worries about the metricisation of scientific recognition and publications and a journal was proposed and set up called the Journal of Negative Results in BioMedicine.

One of the key arguments for ‘open’ research, I think, is that, if set in the right context, it offers a space for failure. Two key parts of that ‘right context’ stand out for me: First, if you are not punished (in terms of reward and recognition and wider measurements such as the UK REF) for saying ‘we tried this and it didn’t work’ more people might publish ‘negative results’. Second, others need to be able to access that information and ‘open’ research promises a means of facilitating that…

Read the article: Making Space for Failure in Geographic Research

The “Ethics and Governance of Artificial Intelligence Fund” commits $7.6M to research “AI for the public interest”

A press release from the Knight Foundation and another from Omidyar Network highlight their joint effort with several other funders to commit $7.6M to be distributed across several international institutions to research “AI for the public interest”. This seems like an ambitious and interesting research programme, albeit located in the elite institutions one might unfortunately expect to hoover up this sort of funding… Nevertheless, it will be interesting to see what comes of this.

Here are some snippets (see the full Knight Foundation PR here and the Omidyar PR here).

…a $27 million fund to apply the humanities, the social sciences and other disciplines to the development of AI.

The MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University will serve as founding academic institutions for the initiative, which will be named the Ethics and Governance of Artificial Intelligence Fund. The fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally.

The fund seeks to advance AI in the public interest by including the broadest set of voices in discussions and projects addressing the human impacts of AI. Among the issues the fund might address:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

Supporting a Global Conversation

  • Digital Asia Hub (Hong Kong): Digital Asia Hub will investigate and shape the response to important, emerging questions regarding the safe and ethical use of artificial intelligence to promote social good in Asia and contribute to building the fund’s presence in the region. Efforts will include workshops and case studies that will explore the cultural, economic and political forces uniquely influencing the development of the technology in Asia.
  • ITS Rio (Rio de Janeiro, Brazil): ITS Rio will translate international debates on artificial intelligence and launch a series of projects addressing how artificial intelligence is being developed in Brazil and in Latin America more generally. On behalf of the Global Network of Internet and Society Research Centers, ITS Rio and the Berkman Klein Center will also co-host a symposium on artificial intelligence and inclusion in Rio de Janeiro, bringing together almost 80 centers and an international set of participants to address diversity in technologies driven by artificial intelligence, and the opportunities and challenges posed by it around the world.

Tackling Concrete Challenges

  • AI Now (New York): AI Now will undertake interdisciplinary, empirical research examining the integration of artificial intelligence into existing critical infrastructures, looking specifically at bias, data collection, and healthcare.
  • Leverhulme Centre for the Future of Intelligence (Cambridge, United Kingdom): Leverhulme Centre for the Future of Intelligence will be focused on bringing together technical and legal perspectives to address interpretability, a topic made urgent by the European Union’s General Data Protection Regulation coming into force next year.
  • Access Now (Brussels, Belgium): Access Now will contribute to the rollout of the General Data Protection Regulation by working closely with data protection authorities to develop practical guidelines that protect user rights, and educate public and private authorities about rights relating to explainability. The organization will also conduct case studies on data protection issues relating to algorithms and artificial intelligence in France and Hungary.

Bolstering Interdisciplinary Work 

  • FAT ML (Global): FAT ML will host a researcher conference focused on developing concrete, technical approaches to securing values of fairness, accountability, and transparency in machine learning.
  • Data & Society (New York): Data & Society will conduct a series of ethnographically-informed studies of intelligent systems in which human labor plays an integral part, and will explore how and why the constitutive human elements of artificial intelligence are often obscured or rendered invisible. The research will produce empirical work examining these dynamics in order to facilitate the creation of effective regulation and ethical design considerations across domains.


About the Ethics and Governance of Artificial Intelligence Fund 

The Ethics and Governance of Artificial Intelligence Fund aims to support work around the world that advances the development of ethical artificial intelligence in the public interest, with an emphasis on applied research and education. The fund was launched in January 2017, with an initial investment of $27 million from the John S. and James L. Knight Foundation, Omidyar Network, LinkedIn founder Reid Hoffman, the Hewlett Foundation, and Jim Pallotta. The activities supported through the fund aim to address the global challenges of artificial intelligence from a multidisciplinary perspective–grounded in data, code, and academic analysis. The fund will advance the public understanding of artificial intelligence and support the creation of networks that span disciplines and topics related to artificial intelligence. The Miami Foundation is serving as fiscal sponsor for the fund.

Via Alexis Madrigral.

Four Questions – a geography podcast

It’s taken me a while to get to this, somehow (?!), but I’ve been listening to Alice Evans’ podcast “Four Questions”. Alice talks to interesting colleagues from across the discipline about their research for around 30 minutes. There are now seven episodes on SoundCloud you can listen to, so maybe have a listen…

Here’s the first episode with Graham Denyer Willis:

I am aware that with Alice’s prodigious Twitter following some of the few folks who read this site may well have heard the podcast already – which is good 🙂

Thresholds

Clever people at York are talking about Thresholds. Check out the website, it’s really interesting!

Based in the Science & Technology Studies Unit (SATSU) at the University of York, Thresholds is a thematic programme of work that will unfold over the coming months. Taking thresholds as a focal point, this research programme will use a range of diverse resources and perspectives to explore the liminal edges of everyday, organisational and social life. What and who reside beyond or within different types of thresholds? Who has to cross thresholds? What prevents people or things crossing? How does power operate through thresholds? How is it that thresholds articulate with limits, extremes, dangers and tipping points? These are just some of the questions we will explore.

Aimed at generating ideas and dialogue, this programme is geared toward political, conceptual and creative exchanges and contributions. Led by Joanna Latimer, Rolland Munro, Nik Brown and Dave Beer, this programme will develop a variety of perspectives on this central focal point of thresholds. This website will be used to communicate our key ideas, to promote events and to share outputs.

“Masterclass with Melissa Gregg” at University of Bristol in Feb.

This looks really interesting. Mel Gregg has done some excellent work and is a good communicator so I’m sure this is a great opportunity…

Masterclass with Dr Melissa Gregg 

27th February 2017, 9:30–17:00

Dr Gregg is a leading world scholar in the field of gender, technology and critical management studies. She is best known for her ethnographic research of information professionals in the book Work’s Intimacy (Polity 2011), and as co-editor of the influential collection The Affect Theory Reader (with Gregory Seigworth, Duke 2010). Dr Gregg is currently working as a Principal Engineer at Intel Corporation and is exceptionally well placed to address the challenges in bridging the gap between organisational scholarship and practice.

This Masterclass is aimed at postgraduate students, academic staff and the wider community and will engage the participants in a critical, interdisciplinary debate on gender, subjectivity, organisations and organising. The day will be organised around recent themes in Dr Gregg’s work that explore technology, gender and culture in Silicon Valley; and methodologies for studying work and society.

Schedule for the Day

9.30-10.00am – Refreshments

10.00-11.30am – Counterproductive: The history of time management from a feminist perspective

11.30-12.00pm – Refreshments

12.00-1.30pm – Technology and the future of work

1.30-2.30pm – Lunch

2.30-4.30pm – Group Discussion: Gender, Culture and Methods

4.30-5.00pm – Refreshments

The Masterclass is free and participants should register by emailing one of the organisers. Refreshments and lunch will be provided. Please also state any dietary requirements. Spaces are limited to 30 participants.

For further information, please contact one of the organisers.