Turkey: The evolving approach to AI governance
Artificial intelligence (AI) is rapidly transforming various sectors globally, and Turkey is no exception. As the adoption of AI technologies accelerates, governments worldwide are addressing the need for comprehensive regulations to ensure ethical and responsible AI development and deployment. Yücel Hamzaoğlu and Melike Hamzaoğlu, from Hamzaoğlu Hamzaoğlu Kınıkoğlu Attorney Partnership, delve into the current state of AI regulation in Turkey, examining guidelines, existing legislation, and the influence of the EU AI Act.
The background of the evolving legal framework
As of the time of writing, Turkey lacks dedicated legislation or regulations exclusively focused on AI. Nevertheless, with increasing interest in AI systems and their implications, new bodies have been formed and the existing legal framework has started to evolve.
Following the establishment of the Department of Big Data and AI Applications within the Presidency's Office of Digital Transformation, one of the most notable steps toward regulating interactions with AI was the formulation of a national strategy. In response to rapid growth in this area, the National AI Strategy (2021-2025) was introduced through the cooperation of the Presidency's Office of Digital Transformation and the Ministry of Industry and Technology to delineate foundational principles guiding legislative endeavors. The Strategy not only sets out crucial objectives for fostering a robust AI ecosystem but also emphasizes the imperative of maintaining data protection, security, and ethical standards within AI systems. Therefore, while no specific law is in place, stakeholders, including governmental bodies and the private sector, are likely engaged in discussions to shape a legal framework for AI in Turkey.
Furthermore, it is important to note that one of the goals stated in Turkey's Medium-Term Programme (2024-2026) is ensuring compliance with EU legislation, including the alignment of the Law on the Protection of Personal Data (the Law), a cornerstone of privacy regulation in Turkey, with the General Data Protection Regulation (GDPR). Following the publication of these objectives in September 2023, significant amendments were made to the Law in March 2024, particularly concerning the processing of special categories of personal data and cross-border data transfers. Consequently, it is also anticipated that specific AI regulations will be introduced and/or that existing legislation will be amended to address AI within the Turkish jurisdiction.
Data protection and the AI guideline
Given that AI systems rely heavily on processing data, the Law holds considerable relevance in determining the rules applicable to such systems. Although it does not explicitly address AI, its provisions encompass the protection of individuals' privacy and personal data, which is integral to AI applications.
Additionally, the Personal Data Protection Authority (KVKK) has published a guideline entitled Recommendations on Personal Data Protection in the Field of Artificial Intelligence. This AI guideline is tailored for developers, manufacturers, service providers, and decision-makers operating in the field of AI, offering guidance on safeguarding personal data in accordance with the Law. According to the AI guideline, every effort should be made to comply with personal data protection rules from the very start of a project, in line with a data protection compliance plan tailored specifically to each project. Compliance with the Law and its secondary regulations has therefore become an integral part of developing and implementing AI technologies in Turkey.
The AI guideline released by the KVKK encompasses a structured framework with three key sections, each addressing vital aspects of AI implementation and regulation.
The first section outlines general recommendations, emphasizing the significance of data protection principles such as data minimization, security, transparency, and accountability, particularly in studies involving AI and personal data processing. It highlights the necessity of conducting a Privacy Impact Assessment for AI systems posing high risks to personal data protection, advocates for the application of anonymization techniques, and underscores the importance of determining the roles of parties within the AI ecosystem as data controllers or processors at the outset of AI technology development.
The second section provides recommendations tailored for developers, manufacturers, and service providers. During the design phase of an AI system, a privacy-centric approach aligned with national and international regulations should be adopted. It is imperative to prevent discrimination and biases at every stage of data processing, including collection, by prioritizing fundamental rights and freedoms. Assessing data quality, quantity, category, and content is essential to minimize data usage, with ongoing monitoring of AI model accuracy. The input of academic institutions is crucial in developing AI practices aligned with human rights and social values, as well as in identifying potential biases. Individuals should have the right to object to technologies affecting their personal development, and risk assessments involving active individual participation should be promoted. Products should not expose data subjects to automated decisions without their input, and algorithms ensuring legal accountability should be prioritized. Users should be empowered to suspend data processing activities and request deletion, destruction, or anonymization of their data. Transparent notification of data processing grounds, methods, and potential consequences is essential, along with establishing consent mechanisms where necessary.
The final section of the AI guideline is designated for decision-makers. In essence, it stresses the importance of accountability, necessitating defined risk procedures for personal data protection and the creation of an implementation matrix. Furthermore, it advocates for the establishment of codes of conduct, certification mechanisms, and comparable measures. Clarity on the role of human intervention in AI decision-making processes is essential to enable users to challenge AI recommendations. It also emphasizes the allocation of adequate resources for AI studies, the provision of personal data protection training, and the promotion of active individual participation in these processes.
The AI guideline stresses the need for a thorough framework to guide the responsible development and deployment of AI technologies. This involves prioritizing data protection principles, considering ethical implications, preventing biases, and empowering individuals to question AI recommendations. Decision-makers play a crucial role in ensuring accountability, implementing risk procedures, and allocating resources for AI studies. In essence, Turkey aims to nurture a responsible and ethical AI ecosystem that prioritizes individual rights and values while also driving innovation, the same goal that led to the introduction of the EU AI Act in Europe.
The EU AI Act and its impact on Turkey
The EU's AI Act, proposed in April 2021 and approved by the European Parliament in March 2024, aims to create a harmonized legal framework for AI across Member States. While Turkey is not an EU Member State, the country's close economic ties and collaborative initiatives with the EU raise questions about the potential impact of the EU AI Act. Against this backdrop, Turkey may align its legal framework with EU standards to facilitate international cooperation and ensure compatibility with European partners.
Moreover, while aligning the existing framework with the EU AI Act is one aspect, being required to comply with it is another. The EU AI Act will impose obligations on providers, deployers, importers, distributors, and manufacturers of AI systems connected to the EU market. Thus, like the GDPR, the territorial scope of the EU AI Act is extensive. As a result, it will apply not only to providers placing AI systems or general-purpose AI models on the EU market or putting them into service, but also to providers and deployers of AI systems in third countries where the output of the AI system is used within the EU. Considering Turkey's increasing interest in creating an environment for developing and marketing AI applications both domestically and internationally, numerous public and private initiatives are likely to fall within the territorial scope of the AI Act.
Furthermore, given that the GDPR has significantly influenced the country's data protection landscape, the EU AI Act is expected to have a similarly formative impact. Hence, companies operating in Turkey are strongly advised to implement an additional compliance plan when embarking on AI projects, in line with the KVKK's AI guideline. They should also consider the obligations imposed by the EU AI Act, as it is likely to set global standards for the AI market.
Conclusion
Turkey stands at the crossroads of AI regulation, balancing the need for innovation with ethical considerations and privacy concerns. While existing legislation provides a foundation, the formulation of specific AI laws and potential alignment with the EU AI Act signals a proactive approach. As the global AI landscape evolves, Turkey's commitment to responsible AI governance will play a pivotal role in shaping its digital future.
Yücel Hamzaoğlu, Partner
[email protected]
Melike Hamzaoğlu, Partner
[email protected]
Hamzaoğlu Hamzaoğlu Kınıkoğlu Attorney Partnership, Istanbul