
Thailand has made significant strides in establishing a robust framework for data privacy and data protection. The Personal Data Protection Act 2019 (PDPA) came into full effect on June 1, 2022, following two postponements of enforcement. The PDPA is Thailand's first comprehensive data protection law covering private sector entities. Before the PDPA, data protection regulations in Thailand were limited to specific sectors, such as government agencies handling personal data, telecommunications, and the National Credit Bureau. These sectors must now comply with both their existing sector-specific regulations and the PDPA.

Since the PDPA came into full effect, Thailand's regulatory body under the PDPA, the Personal Data Protection Committee (PDPC), has gradually issued various pieces of sub-legislation, marking a critical development in the country's legal landscape. However, although the PDPA aims to protect individuals' personal data while balancing the needs of businesses, certain areas remain unaddressed by sub-legislation, and the lack of official guidelines makes it challenging for businesses to fully comply with the PDPA.

Furthermore, as globalization reshapes the business landscape, multinational organizations must navigate a complex web of legal and regulatory frameworks, especially as data protection and privacy continue to evolve in Thailand. Standards under the PDPA that govern vendor relationships are one of the key considerations businesses should carefully review, given the increasing reliance on third-party vendors for data processing. In this Insight article, Chanakarn Boonyasith, Pitchabsorn Whangruammit, and Pattaranun Hanwongpaiboon, from Nishimura & Asahi, provide an overview of vendor privacy contracts in Thailand, highlight key legal requirements, and outline important considerations for multinational organizations operating in Thailand.

The Philippines is preparing for a quantum leap in artificial intelligence (AI) adoption with the launch of the National AI Strategy Roadmap 2.0 (NAISR 2.0). Announced publicly on July 3, 2024, by the Department of Trade and Industry (DTI), and supported by the Asian Development Bank and a legislative mandate under the Tatak Pinoy Act (Republic Act No. 11981), this updated roadmap aims to position the country as a regional AI powerhouse, fostering innovation and sustainable economic growth. For businesses and stakeholders keen on staying ahead of the curve, a clear grasp of the differences between the 2021 NAISR (NAISR 1.0) and NAISR 2.0 is key to anticipating the upcoming legislative and regulatory landscape in the Philippines.

In this Insight article, Edsel F. Tupaz and Danica Anne S. Escobiñas, from Gorriceta Africa Cauton & Saavedra, take a deep dive into the NAISR 2.0, its interplay with other privacy laws, and the impact it may have on development in the Philippines.

Data scarcity, or more specifically the lack of usable, high-quality data (that is, data that has been cleaned, processed, and formatted, and that is accurate and relevant, in each case to a widely accepted standard), is an endemic problem across various industries, and the healthcare and pharmaceutical industries are no exception. An article published by the World Economic Forum (WEF) noted that although the healthcare sector generates a 'staggering amount of data,' a whopping '97% of it goes unused.'

In an ideal world, we would see centralized digital repositories with complete, up-to-date, fully usable health information available to the community within an accountable and secure framework.

The lack of such a framework can lead to undesirable outcomes. Arguing that the COVID-19 pandemic threw a spotlight on the dangers arising from the lack of such access, one US commentator called for a national health information exchange to address this issue: one that formally establishes required standards for creating, maintaining, and sharing these records, and that creates the infrastructure and capability to gather, integrate, store, and use health records.

Other commentators argue that data scarcity or 'health data poverty' affecting certain communities could result in unrepresentative datasets that produce poor-quality and ineffective medical solutions developed with artificial intelligence (AI) and machine learning - essentially creating a digital divide where some of the best advances in medical innovation leave out (and leave behind) such communities. Indeed, persistent data scarcity in medical machine learning has led some authors to propose solutions such as reliance on synthetic data, but even such solutions need to ensure realism and avoid harmful biases.

The WEF article identified three areas of concern that must be addressed: 'the sheer volume of data, data privacy concerns, and the need for interoperability.'

In this Insight article, Jeffrey Lim and Frederick Tay, from Joyce A. Tan & Partners LLC, look at this issue from a data privacy perspective and discuss ongoing work on the Health Information Bill to develop a potential national health data repository in Singapore. This Bill has the potential to provide the framework for the delivery of a high-quality, usable healthcare data exchange. The authors ask how such legislation might establish not only a national repository and database to serve patients, but also a sound, secure, and accountable framework for data collection, use, and processing in the name of innovation, improving healthcare, pharmaceutical solutions, and the medical outcomes of therapies and treatments.

The National Privacy Commission (NPC) issued NPC Circular No. 2024-02, entitled Closed-Circuit Television (CCTV) Systems (the Circular), on August 9, 2024. The Circular notes the previously issued NPC Advisory No. 2020-04, entitled Guidelines on the Use of Closed-Circuit Television (CCTV) Systems, and recognizes the need for an updated policy given the continuously evolving nature of CCTV technology. Thus, the NPC has provided guidelines to assist all personal information controllers (PICs) and personal information processors (PIPs) in navigating the emerging privacy risks arising from the use of CCTV systems.

In this Insight article, Edsel F. Tupaz and Luis Teodoro B. Pascua, from Gorriceta Africa Cauton & Saavedra, highlight salient changes in the NPC's policy since NPC Advisory No. 2020-04 and discuss the practical implications of those changes.

With the growth and increasing prevalence of artificial intelligence (AI), including generative AI, the privacy and ethical risks brought by this new technology cannot be overstated. Ada Chung Lai-Ling, Privacy Commissioner for Personal Data (PCPD), Hong Kong, China, looks at what steps organizations can take to ensure compliance and at the guidance the PCPD offers to help them do so.

On September 5, 2024, the Department of Industry, Science and Resources (DISR) published a paper on Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings (the Proposed Guardrails). The DISR also announced a public consultation on the paper to receive feedback, which will be used to guide Australia's approach to artificial intelligence (AI) regulation.

OneTrust DataGuidance provides an overview of the proposed guardrails and how they compare to other emerging AI regulations, with expert comments provided by Alec Christie, from Clyde & Co LLP.

In this Insight article, Vick Chien, Ken-Ying Tseng, and Evelyn Shih, from Lee and Li, Attorneys-at-Law, introduce Taiwan's progressive steps towards artificial intelligence (AI) governance. With the 2024 Draft Artificial Intelligence Basic Act (the Draft Act), Taiwan addresses legal challenges while promoting sustainable development, transparency, and innovation in AI technologies.

Further to the Provisions on Promoting and Regulating Cross-border Data Transmission promulgated by the Cyberspace Administration of China on March 22, 2024 (the CBDT Provisions), which substantially reduce the procedural burden associated with transmitting data out of China, Beijing authorities issued the Administrative Measures for the Negative List for Data Export from China (Beijing) Pilot Free Trade Zone (for trial implementation), including a detailed negative list (the Beijing Negative List), on August 30, 2024. The Beijing Negative List aims to further liberalize the transmission of data out of the Beijing pilot free trade zone (FTZ) and provide more clarity on the identification of important data. Beyond benefiting existing companies in the Beijing FTZ, the business-friendly tone of the Beijing Negative List gives companies good reason to revisit their current data export set-up and make the best use of the benefits offered under the new rules. Are these rules indeed a game changer that makes a substantial difference?

Dr. Michael Tan and Julian Sun, from Taylor Wessing, take a deep dive into the new measures and the Beijing FTZ, comparing them with other FTZs in China.

In this Insight article, Kritiyanee Buranatrevedhya and Phatrajarin Tanjaturon, from Baker & McKenzie Limited Attorneys at Law, introduce Thailand's National Science and Technology Development Agency (NSTDA) Ethical Guidelines for artificial intelligence (the Guideline), announced in March 2022, a framework designed to promote responsible and ethical artificial intelligence (AI) practices.

The Personal Data Protection Act B.E. 2562 (2019) of Thailand (PDPA), effective from June 1, 2022, is Thailand's key legislation providing comprehensive protection for personal data. Local and foreign entities that collect, use, or disclose the personal data of data subjects in Thailand are subject to the PDPA. Cross-border data transfers are subject to stringent requirements under the provisions of the PDPA and the applicable rules issued under it. Multinational corporations (MNCs) are required to have adequate data protection measures in place for their cross-border data transfer activities.

Kowit Somwaiya and Usa Ua-areetham, from LawPlus Ltd., provide an overview of the key considerations for MNCs when implementing cross-border data transfer mechanisms. The overview focuses on the key requirements for Binding Corporate Rules (BCRs) and the Data Transfer Agreement (DTA) as set out in relevant notifications issued by the Personal Data Protection Committee (PDPC) under the PDPA, such as the implementing rules on the criteria for protecting personal data sent or transferred abroad according to Section 28 of the PDPA (the PDPC rules).

A year in the law and regulation of artificial intelligence (AI) is an eon. Whilst a robust regulatory framework or approach should be consistent, grounded in principles, and predictable, it must also evolve to address changing attitudes and perceptions. And whilst no one wants to kill the goose that lays the golden egg by diving headfirst into innovation-curbing regulation, the shift towards favoring some kind of regulation is noticeably gaining speed, amid growing fears of social, economic, and political ills.

Now, with the EU successfully rolling out its extensive AI regulatory framework in the EU Artificial Intelligence Act (the EU AI Act), one question is whether other countries will follow suit to enact their own compatible or comparable 'hard law' in a classic demonstration of the 'Brussels effect.'

In his Insight article last year, Jeffrey Lim, Director at Joyce A. Tan & Partners LLC, described how Singapore has positioned itself as a paragon of a regulation-lite, community-fostered, and inclusive multi-stakeholder approach. With its voluntary Model Governance Framework now in its second version and encompassing generative AI (the genAI Model Framework), in this Insight article Jeffrey explains how Singapore appears set to continue in this vein - an approach that has already yielded significant private sector in-country investment in AI capability building.

India's commitment to the promotion and development of artificial intelligence (AI) was recently highlighted in the Union Budget of 2024-25, announced by the Indian government in July 2024. The Budget allocated $65 million exclusively to the IndiaAI Mission, an ambitious $1.1 billion program announced earlier this year to focus on AI research and infrastructure in India. It has also been widely reported that the Ministry of Electronics and Information Technology (MeitY) is in the process of formulating a national AI policy, which is set to address a wide spectrum of issues, including the infringement of intellectual property rights and the development of responsible AI. As per reports, MeitY is also analyzing the AI frameworks of other jurisdictions to incorporate learnings from them in its national AI policy. Part I of this series focused on the regulatory approaches adopted by key jurisdictions such as the EU and the USA. In Part II, Raghav Muthanna, Avimukt Dar, and Himangini Mishra, from INDUSLAW, explore measures that India can adopt, and lessons it can take from such markets, in its journey toward the governance of AI systems.