Regulation (EU) 2024/1689 (the AI Act), which aims to promote the uptake of human-centric and trustworthy AI, addresses a wide range of issues; of particular relevance are those that intertwine personal data protection and the training of AI models.
Under the new framework of the AI Act, training, validation and testing datasets for high-risk systems must be subject to specific data governance and management practices.
Such practices concern, for instance, data-collection processes and the assessment of biases that could affect people's health and safety, negatively impact fundamental rights, or lead to prohibited discrimination, especially where system outputs influence inputs for future operations.
Furthermore, the processing of special categories of personal data (“sensitive data”) is permitted only for the purpose of detecting and correcting such biases, provided that synthetic or anonymised data cannot fulfil that purpose and, in any case, that adequate safeguards are in place for the persons concerned, including technical limitations on data reuse, state-of-the-art security measures and a prohibition on transmitting or otherwise making the data available to third parties.
When such systems are used to make decisions that may have significant effects on an individual, the European legislator chose to strengthen the GDPR's protections for data subjects in automated decision-making: the data subject will have the right to obtain from the deployer of the system clear and meaningful explanations of the role of the AI in the decision-making procedure and of the main elements of the decision taken (explainable AI, xAI).