
The search returned 3 results.

AI Governance: Digital Responsibility as a Building Block (journal article)

Towards an Index of Digital Responsibility

Eva Thelisson, Jean-Henry Morin, Johan Rochel

Delphi - Interdisciplinary Review of Emerging Technologies, Volume 2 (2019), Issue 4, Page 167 - 178

The rapid development of AI-based technologies significantly impacts almost all human activities, as these technologies are tied to already existing underlying systems and services. To ensure that they are at least transparent, if not provably beneficial for human beings and society, and that they represent true progress, AI governance will play a key role. In this paper, we reflect on the notion of ‘digital responsibility’ to account for the responsibility of economic actors. Our objective is to outline what digital responsibility is and to propose a Digital Responsibility Index to assess corporate behavior. We argue that a Digital Responsibility Index can play a central role in restoring trust in a data-driven economy and create a virtuous circle, contributing to sustainable growth. This perspective is part of AI governance because it provides a concrete way of quantifying the implementation of AI principles in corporate practice.


The Right to an Explanation (journal article)

An Interpretation and Defense

Maël Pégny, Eva Thelisson, Issam Ibnouhsein

Delphi - Interdisciplinary Review of Emerging Technologies, Volume 2 (2019), Issue 4, Page 161 - 166

The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and prompted the creation of a research subdomain, Explainable Artificial Intelligence (XAI). Opacity would be particularly problematic if those methods were used in the context of administrative decision-making, since most democratic countries grant their citizens a right to receive an explanation of the decisions affecting them. If this demand for explanation were not satisfied, the very use of AI methods in such contexts might be called into question. In this paper, we discuss and defend the relevance of an ideal right to an explanation. It is essential for both the efficiency and accountability of decision procedures, in public administration as well as for private entities controlling access to essential social goods. We answer several objections to this right which claim that it would be at best inefficient in practice or at worst serve as a legal smokescreen. While those worst-case scenarios are certainly within the realm of possibility, they are by no means an essential vice of the right to an explanation. This right should not be dismissed, but defended and further studied to increase its practical relevance.

