November 22

Amphi - Richelieu


Algorithmic Decision-Making: Fairness, Bias and
the Role of Ethics Standards

Ansgar Koene (University of Nottingham)

The UnBias project is investigating issues of algorithmic bias and fairness through a combination of focus-group debates with digital natives aged 12–17; experiments with users on the role of context and transparency in judgments about algorithmic fairness; and multi-stakeholder workshops that bring together perspectives from academia, industry and civil society.

I will present results from the UnBias project and discuss how they are informing our efforts to produce recommendations regarding regulation, design, and education related to algorithmic decision making. An important aspect of this is the development of the IEEE P7003 Standard for Algorithmic Bias Considerations, which is part of the IEEE Global Initiative on Ethical Considerations in Artificial Intelligence and Autonomous Systems. (presentation)

A Sociological View on Personalized Prediction
and Machine Learning Methods

Dominique Cardon (Science-Po)

One of the main characteristics of the modes of computation known as big data is the generalization of machine learning methods. These methods propose to compute society in a way that does not match the requirements of centrality, univocity, and generality of classical statistics, which plots individuals around a statistical mean. In digitized environments, the proliferation of recorded data leads to a massive increase in the number of variables available for computation. Even if the matrices within which those variables are computed remain largely empty, calculations proceed on the assumption that, in certain contexts, rare and improbable variables may contribute to some correlations.

This paradigm thus revives inductive techniques of data analysis and avoids engaging in the reduction and stabilization of the space of relevant variables. Causes thus become inconstant and get combined by the computer in changing ways, depending on the local objectives imposed by the various users that seek to predict their environment. This shift towards personalized prediction implies that the causes of individual behaviors become much more uncertain.

The recording of multiple, disparate behaviors may, in certain circumstances, depending on the context, produce a causality that is sufficient to explain, in a relevant manner, the acts of individuals. The promise of a predictive society is a challenge for our societies: what freedom is left to the choices of individuals? How far can we customize without undoing society? How can we understand and regulate the decisions of new calculators?

Discrimination in Machine Decision Making

Krishna Gummadi (Max Planck Institute)

Machine (i.e., data-driven, learning-based) decision making is increasingly being used to assist or replace human decision making in a variety of domains, ranging from banking (rating user credit) and recruiting (ranking applicants) to the judiciary (profiling criminals) and journalism (recommending news stories).

Recently, concerns have been raised about the potential for discrimination and unfairness in such machine decisions. Against this background, in this talk, I will pose and attempt to answer the following high-level questions:

- How do machines learn to make discriminatory decisions?
- How can we quantify discrimination in machine decision making?
- How can we control machine discrimination? That is, can we design learning mechanisms that avoid discriminatory decision making?
- Is there a cost to non-discriminatory decision making?
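One common way to quantify discrimination in binary decisions is the demographic-parity difference: the gap in favorable-outcome rates between two groups. The sketch below is an illustrative choice (the metric name, function, and data are assumptions, not necessarily the measures used in the talk):

```python
# Illustrative sketch: demographic-parity difference as one possible
# way to quantify discrimination in machine decisions.
# Assumes binary decisions (1 = favorable) and a binary sensitive attribute.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in favorable-decision rates between the two groups."""
    labels = sorted(set(groups))
    assert len(labels) == 2, "sketch assumes a binary sensitive attribute"
    rates = []
    for g in labels:
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))  # favorable rate for group g
    return abs(rates[0] - rates[1])

# Hypothetical data: group "a" is approved 3/4 of the time, group "b" 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # → 0.5
```

A value of 0 would indicate parity of outcomes between groups; larger values indicate a larger disparity, which fairness-constrained learning mechanisms then try to bound.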

Responsibility and Accountability in Algorithmic Systems – An AI Policy Perspective

Jonathan Sage (IBM, AI Policy Lead European Union)

For cognitive systems to fulfill their world-changing potential, it is vital that data subjects and organisations (including governments) have confidence in their recommendations, judgments and uses.

It is important to make clear when and for what purposes AI is being applied in the solutions being developed. Governments have an important role in establishing frameworks within which AI can be used and further developed for societal benefit.

Trusted Smart Statistics in the Era of
Algorithmic Decision-Making

Emanuele Baldacci (Eurostat)

The aim of this presentation is to raise awareness about the role of official statistics in a post-truth society and a world of algorithmic decision-making. It will particularly highlight the action plan and roadmap of the European Statistical System in transitioning from piloting the use of new data sources (such as Big Data) in statistical production towards the development of trusted smart statistics. These will be based on data capture, processing and analysis embedded seamlessly in the statistical production system, enabling near real-time policy monitoring and feedback.

The first Proofs of Concept are expected to be developed in 2018 while further investment is foreseen in the medium term. This is a highly interdisciplinary and multi-stakeholder journey involving statisticians, industry, academia, policymakers, and citizens.