Confirmed Speakers

  • Sihem Amer Yahia, LIG-CNRS, Grenoble, FR, and Shady Elbassuoni, American University of Beirut, LB, Exploring Fairness of Ranking in Online Job Marketplaces
  • Nick Mattei, Cornell University, USA, The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons
  • Khaled Belhacene, HEUDIASYC-CNRS, Université de Technologie de Compiègne (with Nicolas Maudet, LIP6-CNRS), FR, Argumentation for Accountability
  • Vassilis Christophidis, Institute for Advanced Studies, Université de Cergy, FR and University of Crete, GR (with Nikolaos Myrtakis, University of Crete, GR and Eric Simon, SAP, FR), Benchmarking Outlier Explanation Algorithms 
  • Rachel Cummings, Georgia Tech, USA, Differential Privacy for Dynamic Databases
  • Ulle Endriss, ILLC, University of Amsterdam, NL, Algorithmic Explainability and Justifiability of Collective Decisions
  • Kira Goldner, Columbia University, USA (with Nicole Immorlica and Brendan Lucier), Reducing Inefficiency in Carbon Auctions with Imperfect Competition
  • Daniel Le Metayer, INRIA Lyon, FR (with Clément Henin), Accountability requirements for algorithmic decision systems
  • Andrea Loreggia, Università di Padova, IT, Towards a SAFE Artificial Intelligence
  • Pierre Marquis, CRIL, CNRS & Université d’Artois, Institut Universitaire de France, From Explanations to Intelligible Explanations
  • Claire Mathieu, IRIF – CNRS, FR, Algorithmic Fairness Through Linear Regression
  • Nicholas Mattei, Tulane University, USA (with Francesca Rossi), Building Ethically Bounded Artificial Intelligence
  • Fred Roberts, DIMACS, Rutgers University, USA, Socially Responsible Facial Recognition of Animals
  • Marija Slavkovik, University of Bergen, NO, AI ethics and Interaction Design
  • Fabien Tarissan, ISP, ENS Paris Saclay, Recommendation algorithms and information diversity: how to analyze the impact of algorithms in online platforms?
  • Alexis Tsoukiàs, LAMSADE – CNRS, PSL, Université Paris Dauphine, Algorithmic Fairness: Fair for whom? 
  • Suresh Venkatasubramanian, University of Utah, USA, Decentering the tech: reformulating questions of algorithmic fairness
  • Elisabeth Williams, 3A Institute, Australian National University (with S. Backwell, G. Bell, K. A. Daniell, J. Debs, Z. Hatfield Dodds, A. Meares, F. Millman, M. Phillipps, O. Reeves), (De)constructing futures: Integrating social responsibility into algorithmic design
  • Noel Derwort, Australian National University, AU, The difference between algorithmic design assumptions and reality for human responses: an aviation lens
  • Vitalii Emelianov, INRIA Lyon, FR, Fairness in Multistage Selection
  • Nikos Myrtakis, University of Crete, GR, Towards a Predictive Explanation of Outliers
  • Raphael Ettedgui, LAMSADE-CNRS, PSL, Université Paris Dauphine, Robustness to adversarial attacks under a game theory perspective

Abstracts

Sihem Amer Yahia, LIG-CNRS, Grenoble, FR, and Shady Elbassuoni, American University of Beirut, LB, Exploring Fairness of Ranking in Online Job Marketplaces

In this talk, I will present a study on fairness of ranking in online job marketplaces. We focus on group fairness and aim to algorithmically explore how a scoring function, through which individuals are ranked for jobs, treats different demographic groups. To quantify the fairness of a ranking of a group of individuals, we formulate an optimization problem: find the partitioning of those individuals, based on their protected attributes, that exhibits the highest unfairness with respect to the scoring function. Since the number of ways to partition individuals is exponential in the number of their protected attributes, we propose a heuristic algorithm to navigate the space of all possible partitionings and identify the most unfair one. We evaluate our algorithm using a simulation of a crowdsourcing platform and a real dataset crawled from the online job marketplace TaskRabbit.
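
As a rough, hedged illustration of the search problem described above, the sketch below exhaustively enumerates the partitionings of candidates induced by subsets of protected attributes and reports the one with the largest gap in mean score between groups. The attribute names, the scoring column, and the gap-based unfairness measure are illustrative assumptions rather than the exact formulation used in the talk; the exponential blow-up of this enumeration is what motivates the heuristic algorithm.

    from itertools import combinations
    from statistics import mean

    # Toy candidate records: protected attributes plus the score assigned by the
    # ranking function under audit (all names and values are invented).
    candidates = [
        {"gender": "F", "age_group": "young", "score": 0.71},
        {"gender": "F", "age_group": "old",   "score": 0.55},
        {"gender": "M", "age_group": "young", "score": 0.82},
        {"gender": "M", "age_group": "old",   "score": 0.64},
    ]
    protected = ["gender", "age_group"]

    def unfairness(groups):
        """Gap between best- and worst-scored groups (one possible unfairness measure)."""
        means = [mean(c["score"] for c in g) for g in groups.values() if g]
        return max(means) - min(means)

    best = None
    # Exhaustive search: one partitioning per non-empty subset of protected attributes.
    for r in range(1, len(protected) + 1):
        for attrs in combinations(protected, r):
            groups = {}
            for c in candidates:
                groups.setdefault(tuple(c[a] for a in attrs), []).append(c)
            u = unfairness(groups)
            if best is None or u > best[0]:
                best = (u, attrs)

    print(f"Most unfair partitioning: by {best[1]} (score gap = {best[0]:.2f})")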

Nick Mattei, Cornell University, USA, The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons

Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established “principal reason” explanations that are required by U.S. credit laws: they both explain a decision by highlighting a set of features deemed most relevant—and withholding others.  These “feature-highlighting explanations” have several desirable properties: They place no constraints on model complexity, do not require model disclosure, provide a justification for a model’s decision or instructions for achieving a different decision, and seem to automate compliance with the law. But they are far more complex and subjective than they appear. In this paper, we demonstrate that the utility of feature-highlighting explanations relies on a number of easily overlooked assumptions that are rarely justified: that the recommended change in feature values clearly maps to real-world actions, that features can be made commensurate by looking only at the distribution of the training data, and that features are only relevant to the decision at hand. They also largely depend on the underlying model being stable over time, monotonic, and limited to binary outcomes. While new research suggests several ways that feature-highlighting explanations can work around some of the problems that we identify, the disconnect between features in the model and actions in the real world—and the subjective choices necessary to compensate for this—must be understood before these techniques can be usefully implemented.
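
To make the object of discussion concrete, here is a minimal, hedged sketch of a counterfactual-style explanation for a toy linear scoring model: the smallest single-feature change that reaches the decision boundary. The feature names, weights, and applicant values are invented for illustration; real counterfactual methods and real credit models are considerably more involved, which is exactly where the assumptions discussed in the talk come in.

    import numpy as np

    # Toy linear credit model: approve iff w.x + b >= 0 (feature names invented).
    features = ["income", "debt", "years_at_job"]
    w, b = np.array([0.8, -1.2, 0.3]), -0.1

    def single_feature_counterfactuals(x):
        """Smallest change to each single feature that reaches the decision boundary."""
        score = w @ x + b
        out = []
        for i, name in enumerate(features):
            if w[i] != 0:
                delta = -score / w[i]          # change to feature i so the new score is 0
                out.append((name, delta))
        return sorted(out, key=lambda t: abs(t[1]))

    x = np.array([0.2, 0.9, 1.0])              # a rejected applicant
    print("decision:", "approve" if w @ x + b >= 0 else "reject")
    for name, delta in single_feature_counterfactuals(x):
        print(f"  change {name} by {delta:+.2f} to reach the decision boundary")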

Khaled Belhacene, HEUDIASYC-CNRS, Université de Technologie de Compiègne (with Nicolas Maudet, LIP6-CNRS), FR, Argumentation for Accountability

“Intelligent” systems that provide assistance in decision-aiding processes are becoming pervasive. In order to equip them with the capability of accounting for their recommendations, we propose a deductive approach in which normative and declarative constraints give shape to necessary, possible, or impossible recommendations. The formal certificates of infeasibility of the underlying inverse problem can then be interpreted as arguments supporting the recommendation, on the grounds of the principles describing the normative stance and of previous statements made during a dialogue. These arguments could be used to address accountability issues such as procedural regularity or contestability.

Vassilis Christophidis, Institute for Advanced Studies, Université de Cergy, FR and University of Crete, GR (with Nikolaos Myrtakis, University of Crete, GR and Eric Simon, SAP, FR), Benchmarking Outlier Explanation Algorithms 

Detection of anomalies (i.e., outliers) in multi-dimensional data (e.g., in scientific, IoT, or system-monitoring applications) is a well-studied subject in data mining. Unfortunately, off-the-shelf detectors provide no explanation of why a data point was considered abnormal or which of its features (i.e., subspaces) exhibit the best separability from the normal data. Such outlier explanations are crucial to diagnose the root cause of data anomalies and to enable corrective actions that prevent or remedy their effect in downstream data modeling. In this work, we present a comprehensive framework for benchmarking unsupervised outlier explanation algorithms that are domain- and detector-agnostic, in order to uncover several insights missing from the literature, such as: (a) Is it possible to plug existing explanation algorithms into any off-the-shelf outlier detector? (b) How is the behavior of outlier detection and explanation pipelines affected by the number or the correlation of features in a dataset? (c) What is the quality of summarization when points are explained by subspaces of different dimensionality?
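
The kind of plug-and-play pipeline being benchmarked can be sketched as follows: an off-the-shelf detector flags outliers, then a detector-agnostic explainer ranks low-dimensional feature subspaces by how far the outlier lies from the inlier distribution. The z-score separability measure, the 2-D subspace limit, and the synthetic data are simplifying assumptions made only for illustration.

    from itertools import combinations
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    X[0, 2] = 8.0                      # plant an anomaly visible mainly in feature 2

    # Step 1: any off-the-shelf detector could be plugged in here.
    labels = IsolationForest(random_state=0).fit_predict(X)   # -1 marks outliers
    outliers, inliers = X[labels == -1], X[labels == 1]

    # Step 2: detector-agnostic explanation -- rank small feature subspaces by how
    # strongly they separate the outlier from the inliers (mean z-score here).
    def separability(point, subspace):
        cols = list(subspace)
        mu, sd = inliers[:, cols].mean(axis=0), inliers[:, cols].std(axis=0) + 1e-9
        return np.abs((point[cols] - mu) / sd).mean()

    subspaces = [s for r in (1, 2) for s in combinations(range(X.shape[1]), r)]
    for o in outliers[:1]:             # explain the first flagged point
        best = max(subspaces, key=lambda s: separability(o, s))
        print("best explaining subspace (feature indices):", best)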

Rachel Cummings, Georgia Tech, USA, Differential Privacy for Dynamic Databases

Privacy concerns are becoming a major obstacle to using data in the ways we want. How can data scientists make use of potentially sensitive data while providing rigorous privacy guarantees to the individuals who provided it? Over the last decade, differential privacy has emerged as the de facto gold standard of privacy-preserving data analysis. Differential privacy ensures that an algorithm does not overfit to the individuals in the database by guaranteeing that if any single entry in the database were changed, the algorithm would still have approximately the same distribution over outputs.
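
For reference, the informal guarantee above is usually formalized as (ε, δ)-differential privacy: for any two databases D and D' differing in a single entry, and for every set S of possible outputs of the randomized algorithm M,

    \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta,

where smaller values of ε and δ correspond to stronger privacy (pure ε-differential privacy is the case δ = 0).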

In this talk, we will briefly survey the definition and properties of differential privacy, and then focus on recent advances in differential privacy for dynamic databases, where the content of the database evolves over time as new data are acquired. First, we will see how to extend differentially private algorithms for static databases to the dynamic setting with relatively small loss in the privacy-accuracy tradeoff. Next, we will see algorithms for privately detecting changes in data composition. We will conclude with a discussion of open problems in this space, including the use of differential privacy for other types of data dynamism.
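
As a concrete reference point for the static primitive that such dynamic schemes build on, here is a minimal sketch of the standard Laplace mechanism for a counting query (not an algorithm from the talk). Naively re-releasing such a count after every update would, by basic composition, consume privacy budget linearly in the number of releases, which is exactly the tension that dynamic-database techniques address.

    import numpy as np

    def private_count(values, predicate, epsilon, rng=np.random.default_rng()):
        """epsilon-DP count: one person changes the count by at most 1 (sensitivity 1),
        so adding Laplace noise of scale 1/epsilon suffices."""
        true_count = sum(1 for v in values if predicate(v))
        return true_count + rng.laplace(scale=1.0 / epsilon)

    ages = [34, 29, 61, 45, 52, 38]
    print(private_count(ages, lambda a: a >= 40, epsilon=0.5))   # noisy answer near 3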

Ulle Endriss, ILLC, University of Amsterdam, NL, Algorithmic Explainability and Justifiability of Collective Decisions

Given the preferences of several agents over a set of alternatives, how can we justify the selection of some “best” alternative? A good justification would consist in a step-by-step explanation, with each step grounded in some fundamental normative principle. In this talk, I will discuss the challenge of generating such justifications automatically. I will present initial results recently obtained in joint work with Arthur Boixel (Amsterdam). I will also discuss connections to earlier joint work with Olivier Cailloux (Paris).

Kira Goldner, Columbia University, USA (with Nicole Immorlica and Brendan Lucier), Reducing Inefficiency in Carbon Auctions with Imperfect Competition

We study carbon cap-and-trade schemes: a policy tool used to distribute licenses and control the social cost of pollution. A license grants its holder the right to pollute a unit of carbon. We focus on the allocation mechanism most commonly deployed in practice: the uniform price auction with a price floor and/or ceiling. This mechanism is not truthful, and we quantify its strategic vulnerabilities. We motivate a benchmark and show how to choose a number of licenses and a high enough price floor such that, even under worst-case strategic behavior, a mechanism from this class still obtains a constant-factor approximation to our benchmark.
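
To make the mechanism concrete, here is a hedged sketch of a uniform-price license auction with a price floor and ceiling, restricted to single-unit bids and clearing at the highest rejected bid. The pricing and tie-breaking rules are simplifying assumptions and not necessarily the exact variant analyzed in the talk.

    def uniform_price_auction(bids, num_licenses, price_floor=0.0, price_ceiling=float("inf")):
        """Allocate `num_licenses` single-unit licenses at one uniform price.

        bids: list of (bidder_id, bid_value). Clearing price is the highest losing
        bid (0 if demand does not exhaust supply), clamped to [floor, ceiling].
        Bidders bidding below the price floor are never served.
        """
        eligible = sorted((b for b in bids if b[1] >= price_floor),
                          key=lambda b: b[1], reverse=True)
        winners = eligible[:num_licenses]
        clearing = eligible[num_licenses][1] if len(eligible) > num_licenses else 0.0
        price = min(max(clearing, price_floor), price_ceiling)
        return [w[0] for w in winners], price

    # Example: 3 licenses, floor 10, ceiling 40.
    winners, price = uniform_price_auction(
        [("a", 50), ("b", 30), ("c", 25), ("d", 12), ("e", 8)],
        num_licenses=3, price_floor=10, price_ceiling=40)
    print(winners, price)   # ['a', 'b', 'c'] at price 12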

Daniel Le Metayer, INRIA Lyon, FR (with Clément Henin), Accountability requirements for algorithmic decision systems

In this talk, we will discuss different dimensions of accountability covering all the phases of the life cycle of algorithmic decision systems. We will illustrate this analysis with an interactive approach to explanations seen as one essential dimension of accountability.

Andrea Loreggia, Università di Padova, IT, Towards a SAFE Artificial Intelligence

As technology becomes more and more pervasive in our everyday lives, new questions arise about Security, Accountability, Fairness, and Ethics (SAFE). These concerns touch all the actors involved in designing, implementing, deploying, and using the technology. The talk addresses such concerns by presenting a set of practical obligations and recommendations for the development of applications and systems based on Artificial Intelligence (AI) techniques. These are derived from a definition of rights resulting from principles and ethical values rooted in the fundamental documents of our social organisation.

Pierre Marquis, CRIL, CNRS & Université d’Artois, Institut Universitaire de France, From Explanations to Intelligible Explanations

Explainability can be defined as the degree to which a human being can understand a decision. It has become a major issue in AI over the past few years, targeting decisions that are generated automatically by programs, especially classifiers and other pieces of software based on machine learning models. In my talk, I will explain why providing intelligible explanations is a hard task in the general case. I will focus on the intelligibility issue, which concentrates much of the difficulty and stems from the fact that defining what a “good” explanation is does not solely concern what should be explained (the explanandum), but also depends on who receives the corresponding explanans. I will present some preliminary results showing how this dimension can be taken into account in an explanation process.

Claire Mathieu, IRIF – CNRS, FR, Algorithmic Fairness Through Linear Regression

Suppose you use ordinary linear regression, learned from data, to select a certain number of students from a set of applicants, who may be Majority or Minority applicants. How do you maximize the average success rate of the students selected while also fighting bias? We show that, assuming a simple theoretical model for the bias in application folders, taking into account various dimensions of the data may simultaneously improve the average success rate and the diversity of the pool selected. We propose a simple optimal algorithm. This is joint work with Vincent Cohen-Addad and Namrata.
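
The following toy simulation (not the authors' model or their optimal algorithm) illustrates the flavor of the result: when observed scores are biased against a Minority group, adding the group dimension as an extra regressor can raise both the expected success of the selected pool and its diversity. All distributions, coefficients, and the bias magnitude are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 2000, 100                       # applicants, seats

    # Toy bias model (an assumption for illustration): true ability is identically
    # distributed across groups, but Minority applicants' observed score is shifted down.
    minority = rng.random(n) < 0.3
    ability  = rng.normal(size=n)
    observed = ability - 0.8 * minority + rng.normal(scale=0.5, size=n)
    success  = ability + rng.normal(scale=0.3, size=n)   # outcome we care about

    def select_topk(feature_columns):
        """Fit OLS of success on the given features, select the k highest predictions."""
        X = np.column_stack([np.ones(n)] + feature_columns)
        beta, *_ = np.linalg.lstsq(X, success, rcond=None)
        return np.argsort(X @ beta)[-k:]

    blind = select_topk([observed])                          # score only
    aware = select_topk([observed, minority.astype(float)])  # score + group dimension

    for name, sel in [("group-blind", blind), ("group-aware", aware)]:
        print(f"{name}: mean success {success[sel].mean():.2f}, "
              f"minority share {minority[sel].mean():.2%}")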

Nicholas Mattei, Tulane University, USA (with Francesca Rossi), Building Ethically Bounded Artificial Intelligence

The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goal we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our lives, whether AI is autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI’s freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven example-based approach for both, or a symbolic rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically bounded AI, we describe two concrete examples, and we outline some outstanding challenges.

Fred Roberts, DIMACS, Rutgers University, USA, Socially Responsible Facial Recognition of Animals

Today facial recognition is changing policing, medicine, commerce and many other areas, with the potential for dramatic benefits. However, there are problems because certain uses of facial recognition technology increase the risk of outcomes that are biased; there is the risk of new intrusions into people’s privacy; and there is the threat of mass surveillance encroaching on democratic freedoms.

Biometric identification of individual animals, both domesticated and wild, is increasingly widespread. The potential benefits include prevention of spread of animal diseases, ways to address world hunger, and assistance in understanding animal behavior and measuring biodiversity, not to mention potentially lucrative economic developments. However, the potential dangers include injury to animals, disease spread arising from inaccurate identification, miscalculation of wild animal populations, and economic loss from dependence on animal identification algorithms. There is very little discussion of these potential dangers.

This talk will survey uses of facial recognition and related algorithms for identification of domesticated and wild animals and will discuss the potential dangers of such algorithms and the approaches needed to protect against negative consequences. It seeks to bring a socially responsible perspective to these issues.

Marija Slavkovik, University of Bergen, NO, AI ethics and Interaction Design

Numerous ethics and legal guidelines specify that an autonomous system should be supervised or controlled by a human, or be inspectable by a human. A human-computer system necessarily involves an interface for interaction. It is well documented that interface and interaction design can be used to persuade or nudge people towards particular choices and can thus have a profound impact on the overall ethical behaviour of the automated system. Yet how these interfaces should be designed remains outside the AI ethics conversation. We argue that ethics guidelines for autonomous systems are meaningless without interaction design specifications, and we give an overview of systems and applications where interaction design should be given special consideration by regulators.

Fabien Tarissan, ISP, ENS Paris Saclay, Recommendation algorithms and information diversity: how to analyze the impact of algorithms in online platforms?

Whether for information ranking (e.g., search engines) or content recommendation (e.g., on social networks), algorithms are at the core of the processes that select which information is made visible. These algorithmic choices have, de facto, a strong impact on users' activity and therefore on their access to information. This raises the question of measuring the quality of the choices made by algorithms and their impact on users. In this presentation, we will discuss how to exploit the network structure generated by users' activity in order to reveal the diversity of the information they access.
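
As one hedged example of such a network-based measure (an illustration only, not necessarily the measure used in this work): from the bipartite graph linking users to the content they accessed, a per-user diversity score can be computed as the Shannon entropy of the categories of that content.

    from collections import Counter
    from math import log2

    # Toy bipartite activity data: user -> accessed items, item -> category
    # (all names and categories are invented).
    accesses = {
        "alice": ["a1", "a2", "a3", "a4"],
        "bob":   ["a1", "a1", "a5", "a5"],
    }
    category = {"a1": "politics", "a2": "sports", "a3": "science",
                "a4": "culture", "a5": "politics"}

    def diversity(items):
        """Shannon entropy of the category distribution of accessed items (in bits)."""
        counts = Counter(category[i] for i in items)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    for user, items in accesses.items():
        print(f"{user}: diversity = {diversity(items):.2f} bits")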

Alexis Tsoukiàs, LAMSADE – CNRS, PSL, Université Paris Dauphine, Algorithmic Fairness: Fair for whom? 

The talk addresses three provocations. First: why should algorithms be fair if society is not? Why should they behave differently from what our societies do and have democratically chosen to do? Second: we survey fairness as studied in economics and social choice theory. Besides well-known impossibility results, we address the subjective dimension of fairness, which depends, on the one hand, on the decision makers and, on the other hand, on the subjects of the decisions. Fairness is a policy addressing subjective values. Third: several recent attempts to formalise fairness state that “similars should be treated similarly”. However, if similarity is to be measured, we need appropriate empirical observations on which to ground such measures. The problem here is that clustering the population along “objective features” (age, gender, income, etc.) fails to be relevant for any policy of fairness. Should we instead interpret fairness as freedom of choice?

Suresh Venkatasubramanian, University of Utah, USA, Decentering the tech: reformulating questions of algorithmic fairness

The many algorithmic formulations of fairness in automated decision-making focus on aggregate notions of fairness and aim to deliver solutions that are “fair” under these definitions. But what of the individuals affected by these solutions? I will present examples of how centering the affected individuals changes the mathematical formulations of fairness that we might investigate, leading to new mathematical and algorithmic challenges. 

Elisabeth Williams, 3A Institute, Australian National University (with S. Backwell, G. Bell, K. A. Daniell, J. Debs, Z. Hatfield Dodds, A. Meares, F. Millman, M. Phillipps, O. Reeves), (De)constructing futures: Integrating social responsibility into algorithmic design

When exploring our algorithmic past, traces of human decision-making can be found etched in lines of code – in naming conventions, in cryptic comment lines, and increasingly, captured within data used to generate algorithms capable of mimicking human decision making at scale. And yet, algorithmic systems – once compiled, uploaded, and embodied in physical objects that are capable of shaping our lives – no longer bear clear signs of these human imprints.  In this talk, we will present some results from an educational experiment being used to reimagine how we create these systems and discuss how this work relates to a larger mission to create a new branch of engineering to help artificial intelligence go safely and equitably to scale. 

PhD Student Talks

Noel Derwort, Australian National University, AU, The difference between algorithmic design assumptions and reality for human responses: an aviation lens

Commercial aircraft operations and incidents, such as the Boeing 737 Max accidents, have highlighted the potential benefits, challenges, and limitations of incorporating algorithms into complex systems such as commercial aircraft. When the physical aircraft, the crew, and the operating environment are combined, problem complexity increases further. This presentation will highlight that a contributing factor in all of the studied scenarios is the set of assumptions about pilot responses and capabilities made during algorithmic design.

Vitalii Emelianov, INRIA Lyon, FR, Fairness in Multistage Selection

The rise of algorithmic decision making has led to active research on how to define and guarantee fairness, mostly focusing on one-shot decision making. In several important applications such as hiring, however, decisions are made in multiple stages, with additional information available at each stage. In such cases, fairness issues remain poorly understood.

Nikos Myrtakis, University of Crete, GR, Towards a Predictive Explanation of Outliers

Outlier explanation aims to reveal the subset(s) of features in which outliers deviate significantly from the inliers. Recent works focus on discovering feature subsets in which a given set of outliers exhibits a maximum outlierness score. Such descriptive explanations need to be recomputed every time new outliers are discovered. In this work, we focus on the minimal subset(s) of features that describe the decision boundary of an unsupervised outlier detector when scoring outliers, i.e., on predictive explanations that could potentially explain new outlier points.
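
A rough, hedged sketch of the descriptive-versus-predictive distinction (the method below is an illustrative assumption, not the approach presented in the talk): a simple surrogate model is fit to an unsupervised detector's decisions, and the smallest feature subset whose surrogate nearly matches the full-feature surrogate serves as a predictive explanation that can score new points.

    from itertools import combinations
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(800, 5))
    X[:40, 1] += 6.0                                  # anomalies driven mainly by feature 1

    # Detector decisions we want the explanation to predict (True = outlier).
    y = IsolationForest(random_state=0).fit_predict(X) == -1

    def surrogate_accuracy(cols):
        """How well a simple surrogate on this feature subset reproduces the detector."""
        Xs = X[:, list(cols)]
        return LogisticRegression(max_iter=1000).fit(Xs, y).score(Xs, y)

    full_acc = surrogate_accuracy(range(X.shape[1]))
    # Predictive explanation: smallest subset whose surrogate nearly matches the
    # full-feature surrogate; it can then score new points directly.
    for r in range(1, X.shape[1] + 1):
        best = max(combinations(range(X.shape[1]), r), key=surrogate_accuracy)
        if surrogate_accuracy(best) >= full_acc - 0.01:
            print(f"predictive explanation: features {best} "
                  f"(accuracy {surrogate_accuracy(best):.2f} vs {full_acc:.2f} with all features)")
            break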

Raphael Ettedgui, LAMSADE-CNRS, PSL, Université Paris Dauphine, Robustness to adversarial attacks under a game theory perspective

We are interested in adversarial attacks on machine learning models such as neural networks. We adopt a game-theoretic approach and consider mixed strategies from a robustness perspective.