Artificial intelligence and fundamental rights
Report of the European Union Agency for Fundamental Rights (FRA) published
Artificial Intelligence (AI) and its implications for fundamental rights undoubtedly represent one of the main challenges facing our society. Indeed, while much attention has so far been paid to the technology's potential to support economic growth, the question of how it may affect fundamental rights has received far less scrutiny.
Based on this premise, the European Union Agency for Fundamental Rights (FRA) has drawn up a report that identifies the risks associated with the use of AI in four specific areas: social benefits, predictive policing, health services and targeted advertising. The study is based on more than 100 interviews with representatives of public and private organisations as well as experts – including members of supervisory authorities, non-governmental organisations and lawyers – who deal with AI in their respective fields.
As a result of the survey, the Agency has identified the fundamental rights issues that need to be taken into account by policymakers – both at Member State and EU level – and has submitted specific recommendations to ensure the successful and responsible use of AI. These are as follows:
- consider the full scope of fundamental rights with respect to AI, as enshrined in the EU Charter of Fundamental Rights and the Treaties, which apply depending on the context in which AI is used. The research shows that awareness – and, consequently, consideration – of all the fundamental rights that intelligent systems are likely to affect is still far from complete;
- assess in advance the impact of AI on fundamental rights in order to reduce negative effects, since practice to date shows that such assessments have focused mainly on technical aspects (the report suggests, in this regard, the use of checklists or self-evaluation tools);
- ensure an effective and reliable control system to monitor and, where necessary, manage any negative impact of AI systems on fundamental rights, including by making use of existing structures (e.g. Data Protection Authorities) and providing these bodies with adequate resources and powers, as well as the expertise needed to prevent possible violations and offer effective support to victims;
- include specific safeguards against discrimination when using AI, encouraging public and private organisations – including through funding from the European Commission and the Member States – to assess in advance the discriminatory potential of the software they use, since the survey revealed that awareness of the risk of discrimination by AI is relatively low;
- provide more guidance on data protection and, in particular, on the scope and meaning of the legal provisions regarding automated decision-making, as these are crucial aspects in the development and use of AI and are still subject to a high level of uncertainty;
- guarantee access to national justice to challenge decisions based on AI systems, and ensure that this possibility is “effective in practice as well as in law”. This aspect proves particularly problematic, since it presupposes that the functioning of the software is accessible to the individual. As the report points out, such access is often precluded, on the one hand, by developers invoking trade secrecy and, on the other, by the inherent complexity of new-generation algorithms, which may be indecipherable even to their own developers. To address these difficulties, the Agency invites the EU and the Member States to draw up guidelines ensuring transparency in this area and to consider requiring public and private organisations that use AI systems to provide victims of violations with information on how the tools used actually work.
These recommendations will certainly inform future legislation on the use of AI, including in the field of criminal law – first of all in relation to predictive policing, an area already examined by the report, which highlights the well-known concerns about discrimination raised by such software.
In this regard, the study discusses the quality of the data fed to the machine, which concern the history of crimes committed. Such data are frequently distorted or incomplete, since they rest on the subjective perception of the officer who responded and filled in the report at the time; if the algorithm processes biased input, its outcomes can only reflect – or even amplify – existing discriminatory practices (the so-called “garbage in, garbage out” criticism). We should therefore follow the Agency's advice to invest in AI research in order to minimise its discriminatory potential, and to use intelligent systems themselves to analyse the available data and detect discrimination.
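A toy simulation may make the “garbage in, garbage out” point concrete. Everything in the sketch below is hypothetical and not drawn from the report – the district names, figures and patrol-allocation rule are assumptions – but it shows how a system that allocates patrols in proportion to biased historical records keeps reproducing the initial distortion, even when the underlying crime rates are identical.

```python
# Toy simulation of the "garbage in, garbage out" feedback loop.
# Hypothetical assumption (not from the FRA report): districts A and B share
# the SAME true crime rate, but A starts with more recorded incidents
# because it was historically patrolled more heavily.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.5           # identical underlying rate in both districts
recorded = {"A": 60, "B": 40}   # biased history: A is over-represented
TOTAL_PATROLS = 100

for _ in range(20):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to *recorded* (biased) crime counts.
    patrols = {d: round(TOTAL_PATROLS * n / total) for d, n in recorded.items()}
    # Crime is only recorded where officers are actually present,
    # so the skewed allocation feeds straight back into the data.
    for d in recorded:
        recorded[d] += sum(random.random() < TRUE_CRIME_RATE
                           for _ in range(patrols[d]))

share_a = recorded["A"] / sum(recorded.values())
print(f"District A's share of recorded crime: {share_a:.0%}")
# Although both districts are equally 'criminal', A's share stays around the
# biased starting value (~60%) instead of converging to the true 50%: the
# system reproduces the distortion in its input rather than correcting it.
```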
Moreover, the report expresses some concerns about data protection, especially with regard to software that profiles the perpetrators or victims of crimes, whereas software that merely maps the places where crimes are likely to be committed appears less problematic. However, as the report itself acknowledges, adequate remedies for these problems are already provided by the GDPR and, above all, by the so-called Law Enforcement Directive (Directive (EU) 2016/680), which «contains key fundamental rights safeguards»: it imposes on law-enforcement authorities specific obligations to inform the data subject, as well as certain prohibitions. Chief among the latter is the prohibition on adopting a decision based solely on automated processing which produces an «adverse legal effect concerning the data subject or significantly affects him or her». For the Agency, the problem lies rather in how well this legislation is known to those who use such tools; this is a gap that policymakers need to fill promptly, otherwise the valuable fundamental-rights safeguards laid down in the Directive will remain ineffective.
Furthermore, the Agency's guidance will surely shape the debate on the (future) use of algorithms in judicial decision-making in criminal proceedings, an aspect the report does not address. The US experience shows that this context, too, raises several fundamental-rights concerns that deserve careful consideration.
The report was published on 14 December 2020 and can be consulted here.