Intelligent machines surprise us with unexpected behaviors, giving rise to the question of whether such machines exhibit autonomous judgment. With judgment comes (the allocation of) responsibility. While it can be dangerous or misplaced to shift responsibility from humans to intelligent machines, current frameworks for thinking about a responsible and transparent distribution of responsibility among all involved stakeholders are lacking. A more granular understanding of the autonomy exhibited by intelligent machines is needed to promote a more nuanced public discussion and to allow laypersons as well as legal experts to think about, categorize, and differentiate among the capacities of artificial agents when distributing responsibility. To tackle this issue, we propose criteria that would support people in assessing the Machine Capacity of Judgment (MCOJ) of artificial agents. We conceive of MCOJ by drawing on the use of Human Capacity of Judgment (HCOJ) in legal discourse, where HCOJ criteria are legal abstractions for assessing when decision-making and judgment by humans must lead to legally binding actions or inactions under the law. In this article, we show in what way these criteria can be transferred to machines.
Aurelia Tamo-Larrieux, Andrei Ciortea, Simon Mayer
15 Feb 2023