South Africa: AI and data privacy regulations - the complexities of AI technologies and processing personal information
There have been radical developments in various artificial intelligence (AI) models, with ChatGPT being the most prominent. ChatGPT is a language-based AI chatbot built on a set of techniques referred to as deep learning, with continuous learning capabilities. As a result of these revolutionary AI developments, businesses have acknowledged the valuable insights that AI platforms can provide. Such platforms facilitate the generation of contracts, marketing content, CVs, articles, essays, and much more. They do so by gathering and processing data sourced from the internet, encompassing large data sets derived from books, articles, and other online resources. PR de Wet and Jako Fourie, from VDT Attorneys Inc., examine the impact of POPIA on AI developments, with a specific focus on the processing of data by automated means through AI.
Despite the numerous advantages of AI, its rapid development is outpacing existing regulatory frameworks, resources, and capabilities, not only in South Africa but globally.
This gives rise to two main issues: firstly, the concerns surrounding data privacy as AI platforms gather and process vast amounts of user data; and secondly, the challenge of ensuring compliance with data protection regulations such as South Africa's Protection of Personal Information Act (POPIA) or similar EU regulatory measures such as the General Data Protection Regulation (GDPR).
The relevance of POPIA to AI, a South African view
At present, South Africa has no comprehensive legislation governing the use of AI and machine learning in the country.
The processing of personal information, on the other hand, is governed by POPIA. POPIA derives guidance from Section 14 of the Constitution of the Republic of South Africa, which grants the right to privacy to all natural persons. POPIA was enacted in November 2013, and its primary objective is to protect and uphold the right to privacy for all natural persons. It strikes a balance between safeguarding personal data and facilitating the seamless transfer of information within South Africa and beyond international borders. It empowers individuals with more control over their personal information and places responsibilities on responsible parties and operators, such as AI developers, to comply with its provisions.
Section 1 of POPIA defines 'processing' as any operation or activity or any set of operations, whether or not by automatic means, concerning personal information, including the collection, storage, use, and dissemination of such information by responsible parties. AI technologies depend greatly, if not wholly, on the processing of vast databases for their own training, enhancement, and improvement, and as a result, the gathering and processing of personal information might form part of such a process.
POPIA defines 'personal information' as information relating to an identifiable, living natural person and, where applicable, an identifiable, existing juristic person (a legal entity). POPIA thereby effectively extends and codifies the constitutional right to privacy, now also to juristic entities. It includes any information that can be used to identify the person directly or indirectly. This includes, but is not limited to, names, identity numbers, contact details, location data, online identifiers, and other specific attributes that can be linked to an individual or entity.
POPIA, however, does not prohibit the processing of personal information that has been 'de-identified' or 'anonymized.' To satisfy the requirements of 'de-identification' or 'anonymization' of personal information under POPIA, the data must undergo a process that removes any identifying details relating to the data subject. Furthermore, any information that could, by using a 'reasonably foreseeable method,' be linked or combined with other data to identify the individual must also be eliminated. In simpler terms, if the supposedly anonymized personal information can still be used to identify an individual (or a juristic person) using a reasonably foreseeable method, the data would not be considered de-identified or anonymized and would still fall under the purview of POPIA.
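To make the distinction concrete, the illustrative Python sketch below (all field names and values are hypothetical) shows a naive 'de-identification' step that merely strips direct identifiers. As noted above, this alone would not necessarily satisfy POPIA's test, because the remaining attributes may still be linkable to an individual by a reasonably foreseeable method.

```python
# Illustrative only: a naive de-identification step on a hypothetical record.
# Dropping direct identifiers is not automatically "de-identification" under
# POPIA if the remaining attributes can still be linked back to a person.

DIRECT_IDENTIFIERS = {"name", "id_number", "email", "phone"}

def naive_deidentify(record: dict) -> dict:
    """Remove direct identifiers from a single record (hypothetical fields)."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "A. Person",
    "id_number": "8001015009087",
    "email": "a.person@example.com",
    "phone": "+27 82 000 0000",
    "postal_code": "0181",       # quasi-identifiers survive the naive step...
    "birth_date": "1980-01-01",  # ...and may still single out an individual
    "occupation": "attorney",
}

print(naive_deidentify(record))
# {'postal_code': '0181', 'birth_date': '1980-01-01', 'occupation': 'attorney'}
```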
Determining what constitutes a reasonably foreseeable method in the context of AI developments may prove to be a very difficult, or even impossible, task, especially having regard to the extreme pace at which this field of technology is growing. The world is currently experiencing an unprecedented era of technological growth, particularly in AI, with adaptive capabilities previously seen only in movies; such technologies have now found their way into our everyday lives and have infiltrated almost every sector imaginable.
Therefore, with such great uncertainty as to where all of this will end, it is not impossible to imagine a scenario in which AI could acquire the means or capability to process personal information in a manner it was never intended to, for example, by acquiring the capability to re-identify data that has previously been de-identified.
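The kind of re-identification referred to above can be illustrated with a simple linkage exercise. The sketch below (the data and column names are entirely invented) joins a supposedly anonymized data set to a hypothetical public register on shared quasi-identifiers, which is precisely the kind of 'reasonably foreseeable method' that would bring the data back within POPIA's reach.

```python
# Illustrative linkage ("re-identification") sketch with invented data.
# Joining an "anonymized" data set to a public one on shared quasi-identifiers
# can recover identities - one example of a reasonably foreseeable method.
import pandas as pd

anonymized = pd.DataFrame([
    {"postal_code": "0181", "birth_date": "1980-01-01", "diagnosis": "condition X"},
])

public_register = pd.DataFrame([
    {"name": "A. Person", "postal_code": "0181", "birth_date": "1980-01-01"},
])

relinked = anonymized.merge(public_register, on=["postal_code", "birth_date"])
print(relinked[["name", "diagnosis"]])  # the "anonymous" record now names a person
```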
This ultimately raises the question: what is deemed an acceptable 'reasonably foreseeable method?' Should (or could) such a method even exist in such a fast-evolving space, where it may become obsolete in the immediate future? It is for this exact reason that regulatory frameworks such as those defined in POPIA must be designed with technologies like AI in mind, and must keep evolving to accommodate and stay abreast of the technological enhancements of the times. Addressing these challenges requires constant cooperation among various stakeholders, including regulators, industry experts and activists, legal professionals specializing in the relevant field, subject matter experts, and, needless to say, the assistance of technologies such as AI itself.
The Information Regulator, responsible for data protection in South Africa, is in ongoing discussions regarding the regulation of AI development to safeguard the personal information of data subjects. Advocate Pansy Tlakula, chairperson of the Information Regulator, expressed her concern about the use of AI platforms and emphasized the importance of understanding the technical complexities of these systems. The Regulator noted that, given that the use of AI software platforms in South Africa is relatively new and unexplored, proper research must be conducted to gather sufficient technical information and insights before guidelines on AI development are established. The purpose of the study will be to gain a firm understanding of the complexities and implications of AI technology, facilitating the implementation of effective guidelines that address the specific requirements and challenges posed by AI systems in the country. However, whilst this is underway, great uncertainty still prevails over the efficacy of current regulations to safeguard the processing of personal information on AI platforms and to combat the associated risks of the use of AI.
Section 71(1) of POPIA, however, provides some form of protection. This section contains a general prohibition against certain decisions based solely on the automated processing of personal information: a data subject may not be subject to a decision that results in legal consequences for them, or that affects them to a substantial degree, where that decision is based solely on the automated processing of personal information intended to provide a profile of the person, including their performance at work, creditworthiness, reliability, location, health, personal preferences, or conduct.
Section 71(2) of POPIA is the exception to the general rule and provides that such prohibition does not apply if the decision:
- has been taken in connection with the conclusion or execution of a contract, and:
  - the request of the data subject in terms of the contract has been met; or
  - appropriate measures have been taken to protect the data subject's legitimate interests; or
- is governed by a law or code of conduct in which appropriate measures are specified for protecting the legitimate interests of data subjects.
The appropriate measures referred to in Section 71(2)(a)(ii) must, in terms of Section 71(3) of POPIA (see the illustrative sketch after this list):
- provide an opportunity for a data subject to make representations about a decision referred to in subsection (1); and
- require a responsible party to provide a data subject with sufficient information about the underlying logic of the automated processing of the information relating to him or her to enable him or her to make representations in terms of paragraph (a).
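By way of illustration only, the sketch below suggests one highly simplified way in which a responsible party might operationalize these measures in an automated decision-making pipeline. The class names, fields, and logic are hypothetical and do not reflect any prescribed standard.

```python
# Illustrative only: one simplified way to gate decisions based solely on
# automated processing, reflecting the "appropriate measures" contemplated by
# Sections 71(2)(a)(ii) and 71(3) of POPIA. All names and logic are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                   # e.g. "credit application declined"
    has_substantial_effect: bool   # legal consequences or a substantial degree of effect
    solely_automated: bool         # no meaningful human involvement
    profiling_based: bool          # intended to provide a profile of the person
    logic_summary: str = ""        # plain-language explanation of the underlying logic
    representations: list[str] = field(default_factory=list)  # data subject's representations

def may_release(decision: AutomatedDecision) -> bool:
    """Return True only if the decision may be released as matters stand."""
    prohibition_triggered = (
        decision.has_substantial_effect
        and decision.solely_automated
        and decision.profiling_based
    )
    if not prohibition_triggered:
        return True
    # Prohibition triggered: hold the decision until the data subject has been
    # given the underlying logic and an opportunity to make representations.
    return bool(decision.logic_summary) and len(decision.representations) > 0

decision = AutomatedDecision(
    subject_id="DS-001",
    outcome="credit application declined",
    has_substantial_effect=True,
    solely_automated=True,
    profiling_based=True,
    logic_summary="Declined because the affordability score fell below a set threshold.",
)
print(may_release(decision))  # False: representations have not yet been invited
```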
It is, however, with great anticipation that we wait to see whether the Information Regulator will release a code of conduct, as contemplated by Section 71(2)(b) above, to specifically address the use of AI technology in the processing of personal information and perhaps provide some clarity and guidance on the interpretation of Section 71(1), especially on what exactly is meant by the key phrases in the provision quoted below:
'in legal consequences for him, her or it, or which affects him, her or it to a substantial degree, which is based solely on the basis of the automated processing of personal information intended to provide a profile of such person including his or her performance at work, or his, her or its creditworthiness, reliability, location, health, personal preferences or conduct.'
Despite the lack of certainty, the use of AI technology for the processing of personal information would, in our view, fall squarely under the purview of Section 71 of POPIA. By the same token, such processing would be subject to the eight conditions for the lawful processing of personal information [1]. In this regard, a responsible party must secure the integrity and confidentiality of personal information in its possession or under its control by taking appropriate, reasonable technical and organizational measures to prevent loss of, damage to, or unauthorized destruction of personal information, and unlawful access to or processing of personal information, as contemplated by Section 19 of POPIA.
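Purely as an illustration of a technical measure of the kind contemplated by Section 19, the sketch below redacts obvious personal identifiers from text before it is submitted to an external AI service. The patterns shown are hypothetical and far from exhaustive, and such a step would not, on its own, amount to POPIA compliance.

```python
# Illustrative only: a minimal redaction step before text is sent to an external
# AI service. The patterns are hypothetical and not exhaustive; "appropriate,
# reasonable technical and organisational measures" under Section 19 of POPIA
# require far more (access control, encryption, organizational policies, etc.).
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "sa_id_number": re.compile(r"\b\d{13}\b"),    # 13-digit SA identity number
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before processing."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Draft a letter to Jane (jane@example.com, ID 8001015009087, +27 82 000 0000)."
print(redact(prompt))
```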
Closing remarks
What should perhaps be of great concern to anyone who seeks to adopt AI technologies is the recent open letter by 100+ notable signatories calling on 'all AI labs to immediately pause for at least [six] months the training of AI systems more powerful than GPT-4' [2]. They raised valid concerns about the development of ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
The relationship between POPIA and AI, particularly AI relying on the processing of personal information, poses significant risks for responsible parties making use of such technology without fully grasping its capabilities and/or limitations (if any). The ever-evolving interplay between data privacy and AI technology remains uncertain, and it remains to be seen how these challenges will develop in the future. It is unclear whether they will be mutually exclusive or find a way to coexist; one thing, however, is certain: the regulators and legislators have a daunting task ahead of them.
PR de Wet Partner
[email protected]
Jako Fourie Associate
[email protected]
VDT Attorneys Inc., Pretoria
[1] See: https://www.dataguidance.com/opinion/south-africa-processing-childrens-personal
[2] See: https://futureoflife.org/open-letter/pause-giant-ai-experiments/