Decisions that were once principally made by humans are now increasingly being made by algorithms. Public and private decision makers alike may draw on machine learning systems or other artificially intelligent applications to complement or replace human judgment. These kinds of algorithms often have an ‘unexplainable’ architecture: it is usually impossible to review the reasons for which a machine learning system, for example, reaches a particular conclusion. This generates a potential justice problem. Individuals adversely affected by algorithmic decisions may be unable to successfully seek redress through the legal system when the reasons for which a decision has been made are unknowable. This paper will aim to introduce and delineate the scope of this potential justice problem. It will suggest that the use of machine learning systems by public decision makers carries with it the threat that such decisions will become inscrutable and immune from review. The paper will conclude by defending the position that the inscrutability of machine learning systems undermines procedural fairness and endangers the right of individuals to seek redress, and that regulation must be developed to counter such effects.
Justice as Explanation: the right to reasons in algorithmic decision making
Midis-conférences des Jeunes Chercheurs CRDP (2020-2021)
" La justice dans tous ses états "
Online