Taiwan: Administrative guidelines and policies for AI
In this Insight article, Kenying Tseng, Vick Chien, and Evelyn Shih of Lee and Li, Attorneys-at-Law explore Taiwan's dynamic approach to artificial intelligence (AI) regulation. Despite the absence of codified AI laws, Taiwan is actively developing guidelines and policies to ensure the ethical and responsible application of AI, balancing innovation with societal values.
Taiwan, known for its high-tech advancements, is now charting its course in AI governance. To date, Taiwan has no codified laws or statutory regulations directly applicable to AI. While legislation is underway, Taiwan's tech landscape remains vibrant, with various bodies pioneering guidelines and policies to shape AI's ethical and responsible application. The Legislative Yuan, Taiwan's legislative body, has been active since 2019, proposing multiple draft AI laws. Simultaneously, the Executive Yuan, the country's cabinet, is working on its own version of an AI law, scheduled for release in 2024 and subject to legislative approval. The legal framework for AI in Taiwan is thus still evolving, presenting unique challenges and opportunities for effectively governing this disruptive technology.
Regarding guidelines and best practices, Taiwan's National Science and Technology Council (NSTC) released the 'AI Technology R&D Guidelines' (R&D Guidelines) in September 2019. The R&D Guidelines are soft law, underscoring the recognition of AI as a transformative force impacting critical sectors like healthcare, education, and autonomous driving. The R&D Guidelines also emphasize the importance of stakeholders collaborating to develop AI aligned with human-centric values, sustainable development, and diversity and inclusivity, the three fundamental values under the R&D Guidelines, in order to foster a sound AI environment.
Human-centered values prioritize human dignity, freedom, and fundamental human rights, ensuring that AI enhances human life and well-being in a respectful and ethical manner. Sustainable development focuses on balancing economic growth, social progress, and environmental protection, pursuing a future where humanity, society, and the environment thrive together. Diversity and inclusion acknowledge diverse perspectives and backgrounds, promote cross-disciplinary dialogue and public understanding, and facilitate an inclusive AI landscape reflective of society's diversity.
Amid AI's complexity lie potential risks such as discrimination and misuse. To minimize the risks arising from the development of AI technology, the R&D Guidelines offer a roadmap for researchers, focusing on human-machine collaboration, ethical considerations, and technological advancement to ensure AI's positive impact. The R&D Guidelines also outline eight fundamental principles derived from the core values. These principles are:
- Common Good and Well-being: AI researchers should pursue a balance of interests and common well-being among humanity, society, and the environment. They should strive for multiculturalism, social inclusion, and environmental sustainability to ensure human mental and physical health, benefit all people, and enhance the overall environment in an AI society.
- Fairness and Non-discrimination: AI researchers should ensure that AI systems, software, algorithms, and decision-making processes are human-centric, respecting everyone's fundamental human rights and dignity equally. They should avoid biases and discrimination and establish external feedback mechanisms.
- Autonomy and Control: AI applications should assist human decision-making. AI researchers should ensure that AI systems, software, and algorithms are developed to give humans complete and adequate autonomy and control.
- Safety: AI researchers should strive for the safety of AI systems, software, and algorithmic operational environments, including robustness, cybersecurity, risk management, and monitoring. They should ensure AI systems' reasonable and benevolent use, creating a safe and reliable AI environment.
- Privacy and Data Governance: Effective data governance must be established to prevent personal data privacy violations. In AI development and application, AI researchers should ensure that the collection, processing, and use of personal data comply with relevant legal regulations to protect human dignity and fundamental rights. Appropriate management measures should be in place to safeguard the rights of data subjects within AI systems.
- Transparency and Traceability: Given the significant impact of AI-generated decisions on stakeholders, the fairness of decision-making processes must be ensured. A minimum level of information should be provided and disclosed when developing and applying AI systems, software, and algorithms, including but not limited to modules, mechanisms, components, parameters, and calculations. This attention to transparency ensures that people can understand how AI systems reach their decisions. AI development and application should also meet traceability requirements, such as recording data collection, data labeling, and the algorithms used in decision-making.
- Explainability: During the AI development and application stages, efforts should also be made to balance the accuracy and explainability of decision-making. The decisions generated by AI technology should be understandable through text, visuals, examples, etc. These efforts ensure that AI systems, software, and algorithms can be explained, presented, and clarified to users and affected parties after the fact.
- Accountability and Communication: When developing and applying AI technology, researchers should establish accountability and communication mechanisms for AI systems, software, and algorithms to promote public welfare and protect stakeholders' interests. These mechanisms include, but are not limited to, explaining decision-making procedures and results and providing feedback channels for users and affected parties.
The NSTC additionally formulated the 'Administrative and Affiliated Agencies' Guidelines for the Use of Generative AI' (Public Sector Guidelines) in 2023. The Public Sector Guidelines provide a structured approach for leveraging generative AI tools to enhance administrative efficiency while ensuring a robust ethical framework. Among other things, the Public Sector Guidelines provide that information generated by generative AI must be subject to the final objective and professional judgment of the governmental personnel responsible for assessing its associated risks. It must not replace governmental personnel's independent thinking, creativity, and interpersonal interactions.
In addition, governmental personnel must not provide generative AI with information containing classified public affairs, personal data, or information not authorized for public disclosure by the relevant authorities, and must also avoid querying generative AI about matters that may involve confidential or personal information. However, once system security has been confirmed, a generative AI model deployed in a closed environment may be used, subject to the confidentiality levels of the relevant documents or information. Furthermore, governmental agencies should not rely entirely on information produced by generative AI and should not use unverified generative AI content as the sole basis for administrative actions or public decisions. They should also disclose the fact that generative AI has been used as a supplementary tool in performing their duties or enforcement actions.
It is worth noting that while the Public Sector Guidelines currently apply only to governmental agencies, the guidelines also suggest that the private sector can use them as a reference in creating its own rules for using generative AI. The key provisions of the Public Sector Guidelines regarding agencies' responsibilities, particularly the disclosure requirement outlined in point 6, may therefore be incorporated into future AI laws with greater binding force.
Moreover, the Executive Yuan launched the 'Taiwan Artificial Intelligence (AI) Action Plan 2.0' (Action Plan 2.0) in June 2023, which aims to elevate the country's AI industry value beyond NT$250 billion (approx. $7.62 billion) by 2026. Building upon the success of the initial AI Action Plan from 2018, Action Plan 2.0 focuses on talent development, industry growth, and global technological influence, reinforcing Taiwan's position as a tech innovation hub. Regarding governance structures, Action Plan 2.0 stipulates that the Ministry of Digital Affairs will establish an AI evaluation center. This center, alongside draft acts for AI regulation, will provide a legislative foundation for AI usage. As AI integrates further into fields such as medicine, finance, and transportation, more laws and regulations are expected to be enacted to ensure the ethical and effective use of AI technologies. Cross-agency meetings are ongoing to discuss AI-generated content and related ethical considerations.
Recently, in response to the growing trend of using AI in Taiwan's financial market, the Financial Supervisory Commission (FSC) officially issued the 'Guidelines on the Use of AI in the Financial Industry' (Financial Industry Guidelines) in June 2024. These guidelines offer non-binding administrative guidance, establishing best practices for financial institutions to use AI prudently. The Financial Industry Guidelines introduce a risk-based evaluation framework to ensure responsible AI adoption. When conducting a risk-based evaluation, financial institutions should consider factors including:
- AI system usage (customer service or internal management);
- sensitivity of the personal data involved;
- AI system autonomy level;
- AI system complexity;
- impact on stakeholders; and
- possibilities for seeking relief.
Based on the above risk-based evaluation, financial institutions should implement appropriate risk control measures. Examples include maintaining records, establishing approval processes, and conducting audits or assessments. The guidelines also stress the importance of risk evaluation, supervision, and the allocation of responsibilities when using third-party AI systems.
Overall, the above guidelines and policies share the common goal of steering the responsible development and use of AI in Taiwan, ensuring that technological advancements benefit society while safeguarding fundamental human rights, social interests, and environmental sustainability. Looking ahead, Taiwan's regulatory developments and initiatives in AI governance and innovation can be expected to continue fostering an inclusive, ethical, and technologically advanced AI ecosystem, helping all stakeholders and the Taiwan government navigate the complexities of the AI landscape while harnessing its potential.
Kenying Tseng Partner
[email protected]
Vick Chien Partner
[email protected]
Evelyn Shih Attorney
[email protected]
Lee and Li, Attorneys-at-Law, Taiwan