Matching code and law: achieving algorithmic fairness with optimal transport.

    1. [1] Max Planck Institute for Software Systems, Regionalverband Saarbrücken, Germany

    2. [2] Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany

    3. [3] Faculty of Law, Humboldt University of Berlin, Unter den Linden 6, 10099 Berlin, Germany
  • Published in: Data Mining and Knowledge Discovery, ISSN 1384-5810, Vol. 34, No. 1, 2020, pp. 163-200
  • Language: English
  • Full text not available
  • Abstract
    • Increasingly, discrimination by algorithms is perceived as a societal and legal problem. As a response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the continuous fairness algorithm (CFAθ) which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" and "what you see is what you get" proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., of multi-dimensional discrimination of certain groups on grounds of several criteria. We discuss three main examples (credit applications; college admissions; insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
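
The θ-interpolation described in the abstract can be illustrated with a small one-dimensional sketch. The following is not the authors' implementation but a minimal illustration of the underlying idea, assuming scores form a 1-D distribution per group: in 1-D, the Wasserstein-2 barycenter's quantile function is the group-weighted average of the per-group quantile functions, and each raw score is blended with its barycenter-mapped value via θ. The function name cfa_theta and all parameter names are illustrative assumptions.

```python
import numpy as np

def cfa_theta(scores, groups, theta, n_quantiles=100):
    """Hypothetical sketch: theta-interpolation between raw scores and a
    1-D Wasserstein-2 barycenter of the per-group score distributions."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    qs = np.linspace(0.0, 1.0, n_quantiles)

    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()

    # Empirical quantile function of each group's scores.
    group_q = {g: np.quantile(scores[groups == g], qs) for g in labels}

    # In 1-D, the Wasserstein-2 barycenter's quantile function is the
    # group-weighted average of the individual quantile functions.
    bary_q = sum(w * group_q[g] for g, w in zip(labels, weights))

    repaired = np.empty_like(scores)
    for g in labels:
        mask = groups == g
        s = scores[mask]
        # Empirical rank (quantile) of each score within its own group ...
        ranks = np.searchsorted(np.sort(s), s, side="right") / s.size
        # ... pushed onto the barycenter, then blended with the raw score.
        mapped = np.interp(ranks, qs, bary_q)
        repaired[mask] = (1.0 - theta) * s + theta * mapped
    return repaired
```

With theta=0.0 the call returns the raw scores (the "what you see is what you get" end), while theta=1.0 maps every group onto the common barycenter (the "we're all equal" end); intermediate values give the continuous trade-off between individual and group fairness that the abstract describes.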

