Argentina: AAIP publishes guide on transparency and personal data protection in AI
On September 16, 2024, the Argentinian data protection authority (AAIP) published a guide for public and private entities on matters of transparency and protection of personal data for responsible artificial intelligence.
The guide focuses on the implications of technologies based on automated decision-making systems, in particular those that incorporate artificial intelligence (AI), for fundamental rights, and on how to address these challenges from a regulatory and institutional perspective.
What is the aim of the guide?
The guide aims to support actors from both the public and private sectors in:
- incorporating transparency and protection of personal data in technological development projects that implement AI systems;
- preventing risks from early stages and promoting the integration of responsible AI throughout the system's lifecycle; and
- guaranteeing the effective exercise of citizens' rights.
What is the scope of the guide?
The guide applies to public and private actors such as:
- organizations that provide and develop solutions that integrate AI systems;
- governments in the implementation of policies to promote the use of AI;
- organizations that implement AI in their processes and/or products;
- academic institutions that research the impact of AI systems; and
- social organizations that ensure the protection of citizens' rights in the face of the accelerated advance of AI.
What are the defined terms and characteristics of AI systems?
The guide references definitions of AI from the United Nations Educational, Scientific and Cultural Organization (UNESCO), the EU, the Sadosky Foundation, and the Organization for Economic Co-operation and Development (OECD), but categorizes AI as a type of Automated Decision System (ADS), i.e., a system that makes decisions based on a predefined set of rules or algorithms. It adds, however, that an AI system is more complex: its ability to recognize patterns allows it to understand natural language, make decisions, and learn adaptively.
What are the key characteristics of AI systems?
The guide identifies three elements of an AI system:
- sensors: collect raw data from the environment;
- actuators: act to change the state of the environment based on the output of the model; and
- operational logic: provides output to the actuators in the form of recommendations, predictions, or decisions that can influence the state of the environment. This is divided into three parts: the algorithmic model itself, model building (automated or not), and model inference, which is the process by which humans and/or automated tools derive an output from the model.
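The sensor, operational logic, and actuator elements described above can be sketched as a simple feedback loop. The following is an illustrative sketch only; the function and field names (e.g., `temperature`, `operational_logic`) are hypothetical and do not come from the guide.

```python
# Minimal sketch of the three AI-system elements the guide describes:
# sensors collect raw data, operational logic derives an output
# (here, a recommendation), and actuators change the environment.

def sensor(environment: dict) -> float:
    """Collect raw data from the environment."""
    return environment["temperature"]

def operational_logic(reading: float) -> str:
    """Model inference: derive a recommendation from the reading."""
    return "cool" if reading > 25.0 else "idle"

def actuator(environment: dict, decision: str) -> None:
    """Act to change the state of the environment based on the output."""
    if decision == "cool":
        environment["temperature"] -= 1.0

env = {"temperature": 27.0}
decision = operational_logic(sensor(env))
actuator(env, decision)
print(env["temperature"])  # 26.0
```

In a real AI system the operational logic would be a trained model rather than a fixed rule, which is precisely where the guide's transparency and explainability concerns arise.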
What are the main challenges of AI?
The guide identifies several problems and threats of AI systems that are linked to the transparency and protection of data, including:
- biases and discrimination: AI systems can perpetuate or even amplify bias and discrimination, such as perception bias, technical bias, modeling bias, and activation bias. In the processing of big data, ethnic and gender biases/stereotypes can arise, so AI systems that depend on such data, such as facial recognition systems, may have higher error rates;
- lack of data quality: AI systems may rely on low-quality data containing errors or false information, since in many cases the data used is obtained from outdated sources, calling its legitimacy into question;
- risk of privacy breach: AI requires large amounts of personal data to train models and perform inferences which increases the risk of privacy breaches;
- security risks: AI systems may be vulnerable to cyberattacks that can compromise their functioning and could result in unauthorized disclosure of personal data, manipulation of model results, or compromise the integrity of the system;
- improper use of personal data: personal data collected for a specific purpose may be used for other purposes, violating data protection principles;
- identity fraud: deepfakes use AI techniques to impersonate individuals, which can affect people's privacy and be used to deceive third parties into disclosing sensitive information or granting access to systems;
- surveillance without consent: AI systems increase the possibilities of monitoring and analyzing people's everyday habits, i.e., mass surveillance;
- lack of transparency: due to their complex and opaque nature, AI models make it difficult to understand how decisions are made and difficult to explain how each variable influences the final decision, which can undermine citizens' trust in the process and increase the risk of misuse or abuse of the technology; and
- neurodata: AI systems that can predict or infer information about people from nervous-system activity increase the risk of collecting and using neurodata without the explicit and informed consent of individuals, which can result in an invasion of privacy.
What are the principles relating to personal data protection and transparency?
The guide underscores that data used by AI systems must be collected, used, shared, archived, and deleted in a manner consistent with the relevant laws. These laws guarantee rights such as the right to access, rectify, update, and delete data, to object to processing, and to revoke consent, and establish the principles of personal data protection, including lawfulness, consent, purpose, data quality, security, confidentiality, and minimization.
The guide also frames the transparency and explainability of AI systems in two ways: first, as a process involving a set of strategies, practices, instruments, and procedures that organizations must carry out; and second, as a result, whereby organizations are able to make visible the objectives they pursue, the resources they manage, the actions they carry out, and the results they achieve.
What are the recommendations from the guide?
The guide provides various recommendations for transparency and protection of personal data in the lifecycle of AI systems, as divided into four parts:
- system design - actors should, among other things:
- assess the impact of the project on the protection of personal data;
- consider Privacy by Design and by Default;
- ensure the source of data is lawful; and
- apply techniques of minimization, anonymization, encryption, access control, conservation, destruction, etc.;
- verification and validation - actors should:
- publish a system-wide ethical commitment document;
- evaluate whether the algorithms are aligned with established values, principles, and guidelines;
- calibrate the relevance and accuracy of the data; and
- identify the biases;
- implementation - actors should:
- evaluate the user experience to ensure the highest level of accessibility and usability standards;
- secure information through the protection of personal data, transparency, traceability, and auditability;
- document test results and steps to be taken; and
- publish a privacy policy; and
- operation and maintenance - actors should:
- update and inform citizens about the system versioning;
- document system performance, impact assessments, and corrective actions;
- report any ethical and/or safety incidents to the competent authority;
- appoint a data protection officer (DPO); and
- enable communication channels for queries and complaints, internal procedures, and responses in language that is understandable to people who are not experts in AI.
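Two of the design-stage techniques the guide recommends, data minimization and pseudonymization, can be illustrated concretely. This is a sketch under assumed field names (`name`, `email`, etc.); it is not taken from the guide, and a salted hash is only one of several possible pseudonymization approaches.

```python
# Illustrative sketch of two design-stage recommendations:
# data minimization and pseudonymization. All field names are hypothetical.
import hashlib

RAW_RECORD = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age": 34,
    "purchase_total": 120.50,
}

# Minimization: keep only the fields the processing actually needs.
NEEDED_FIELDS = {"email", "age", "purchase_total"}

def minimize(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

# Pseudonymization: replace the direct identifier with a salted hash,
# so records can still be linked without exposing the raw identifier.
def pseudonymize(record: dict, salt: str) -> dict:
    out = dict(record)
    out["email"] = hashlib.sha256((salt + out["email"]).encode()).hexdigest()
    return out

processed = pseudonymize(minimize(RAW_RECORD), salt="project-specific-salt")
assert "name" not in processed        # minimized away
assert "@" not in processed["email"]  # pseudonymized
```

Note that hashing alone is not full anonymization: with access to the salt, the mapping can be recomputed, which is why the guide pairs such techniques with encryption, access control, and retention/destruction rules.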
You can read the guide, only available in Spanish, here.