Artificial intelligence systems now pervade daily life. However, they are not axiologically neutral, and the execution of their algorithms is opaque. In the legal field, the main issues concern the recognition of a machine as the perpetrator or victim of a crime, the prediction of events, and the exercise of judicial activity in compliance with fundamental rights. After illustrating aspects of these problems, the work highlights the need for the use of machines to be subjected to "meaningful human control", the requirements of which are specified.
In the current IT scenario, an increasing tendency towards the digitalisation of the judiciary can be observed, along with the substitution of homo juridicus with software. What appears to be a factor of simplification and modernisation raises several questions when it comes to "replacing" sensitive activities, including the judicial evaluation of the kind and quantum of punishment in an individual case. This paper aims to provide a comprehensive overview of evidence-based sentencing in the US criminal justice system, focusing on the algorithmic evaluation of the defendant's social dangerousness. From a comparative perspective, in Italy such tools would jeopardise fair trial safeguards as well as some crucial principles of criminal procedure. Nevertheless, the idea of introducing certain actuarial risk evaluation techniques into the sentencing process, within some boundaries and with appropriate precautions, can be explored.
The use of automation in decision-making processes offers several advantages, but the complexity of the machine-learning procedure makes the results difficult to predict. It is therefore necessary to ask whether the advent of new technologies requires a rethinking of the fundamental categories of law and of the criminal trial.
In the United States, predictive algorithms of recidivism are used in the ordinary unfolding of trials, not only in the pre-verdict phase but also in sentencing. This paper aims to analyse the structure of algorithmic risk assessment tools, highlighting the critical issues that characterise this software. Particular emphasis will be placed on the inaccessibility of the instrument, which currently operates as a black box, and on the fact that the outcome of such algorithms could bias the judge's decision. Nevertheless, new artificial intelligence technologies, if properly understood and applied, could also find room in our system, in particular when determining the content of the sanction, which could finally be understood as a project rather than as mere retribution.