Algorithms: Bias, Discrimination and Fairness.

Authors
Publication date
2019
Publication type
Other
Summary
Algorithms are an ever-larger part of daily life, whether as decision-support tools (recommendation or scoring algorithms) or as autonomous algorithms embedded in intelligent machines (autonomous vehicles). Deployed across many sectors and industries for their efficiency, their results are increasingly debated and contested. In particular, they are accused of being black boxes and of leading to discriminatory practices related to gender or ethnicity. The objective of this article is to describe the biases linked to algorithms and to outline ways of remedying them. We focus on the outputs of algorithms with respect to fairness objectives and on their consequences in terms of discrimination. Three questions motivate the article: Through what mechanisms do algorithmic biases arise? Can they be avoided? And, finally, can they be corrected or limited? The first part describes how a statistical learning algorithm works. The second part examines the origins of these biases, which can be cognitive, statistical, or economic in nature. The third part presents promising statistical and algorithmic approaches for correcting biases. The article concludes by discussing the main societal issues raised by statistical learning algorithms, such as interpretability, explainability, transparency, and accountability.
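The article's focus on fairness objectives can be made concrete with a simple group-level measure. The sketch below is not taken from the article; it is a minimal, hypothetical Python illustration of demographic parity, one standard way to quantify the kind of disparity between groups (for example, by gender) that the abstract refers to. The function name and toy data are assumptions made for illustration only.

# Hypothetical illustration (not from the article): a demographic-parity check,
# one common way to quantify the group-level disparities the authors discuss.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred : 0/1 predictions of a classifier (e.g., a scoring algorithm)
    group  : 0/1 protected-attribute labels (e.g., two gender groups)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate for group 1
    return abs(rate_0 - rate_1)

# Toy data: the classifier accepts 60% of group 0 but only 20% of group 1.
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.4 -> a substantial disparity

A gap near zero would indicate that positive outcomes occur at similar rates across groups; reducing a large gap is the kind of task addressed by the statistical and algorithmic correction approaches mentioned in the article's third part.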
Topics of the publication
  • No themes identified