CFP: Workshop on Trustworthy Algorithmic Decision-Making

Not sure where I found this, but it may be of interest…

Workshop on Trustworthy Algorithmic Decision-Making
Call for Whitepapers

We seek participants for a National Science Foundation sponsored workshop on December 4-5, 2017 to work together to better understand algorithms that are currently being used to make decisions for and about people, and how those algorithms and decisions can be made more trustworthy. We invite interested scholars to submit whitepapers of no more than 2 pages (excluding references); attendees will be invited based on whitepaper submissions. Meals and travel expenses will be provided.

Online algorithms, often based on data-driven machine-learning approaches, are increasingly being used to make decisions for and about people in society. One very prominent example is the Facebook News Feed algorithm, which ranks posts and stories for each person and effectively prioritizes what news and information that person sees. Police are using “predictive policing” algorithms to choose where to patrol, and courts are using algorithms that predict the likelihood of repeat offending to inform sentencing. Face recognition algorithms are being implemented in airports in lieu of ID checks. Both Uber and Amazon use algorithms to set and adjust prices. Waymo/Google’s self-driving cars use Google Maps not just as a suggestion, but to actually make route choices.

As these algorithms become more integrated into people’s lives, they have the potential to have increasingly large impacts. However, if these algorithms cannot be trusted to perform fairly and without undue influence, there may be serious unintended effects. For example, some computer vision algorithms have mislabeled African Americans as “gorillas”, and some algorithms that predict the likelihood of repeat offending have been shown to be racially biased. Many organizations employ “search engine optimization” techniques to alter the outcomes of search algorithms, and “social media optimization” to improve the ranking of their content on social media.

Researching and improving the trustworthiness of algorithmic decision-making will require a diverse set of skills and approaches. We seek to involve participants from multiple sectors (academia, industry, government, popular scholarship) and from multiple intellectual and methodological approaches (computational, quantitative, qualitative, legal, social, critical, ethical, humanistic).

Whitepapers

To help get the conversation started and to bring new ideas into the workshop, we solicit whitepapers of no more than two pages in length that describe an important aspect of trustworthy algorithmic decision-making. These whitepapers can motivate specific questions that need more research; they can describe an approach to part of the problem that is particularly interesting or likely to help make progress; or they can describe a case study of a specific real-world instance of algorithmic decision-making and the issues or challenges that case raises.

Some questions that these whitepapers can address include (but are not limited to):

  • What does it mean for an algorithm to be trustworthy?
  • What outcomes, goals, or metrics should be applied to algorithms and algorithm-made decisions (beyond classic machine-learning accuracy metrics)?
  • What does it mean for an algorithm to be fair? Are there multiple perspectives on this?
  • What threat models are appropriate for studying algorithms? For algorithm-made decisions?
  • What are ways we can study data-driven algorithms when researchers don’t always have access to the algorithms or to the data, and when the data is constantly changing?
  • Should algorithms that make recommendations be held to different standards than algorithms that make decisions? Should filtering algorithms have different standards than ranking or prioritization algorithms?
  • When systems use algorithms to make decisions, are there ways to institute checks and balances on those decisions? Should we automate those?
  • Does transparency really achieve trustworthiness? What are alternative approaches to trusting algorithms and algorithm-made decisions?

Please submit whitepapers along with a CV or current webpage by October 9, 2017 via email to trustworthy-algorithms@bitlab.cas.msu.edu. We plan to post whitepapers publicly on the workshop website (with authors’ permission) to facilitate conversation ahead of, at, and after the workshop. More information about the workshop can be found at http://trustworthy-algorithms.org.

We have limited funding for PhD students interested in these topics to attend the workshop. Interested students should also submit a whitepaper with a brief description of their research interests and thoughts on these topics, and indicate in their email that they are PhD students.
