International: Navigating the AI frontier - understanding regulatory approaches in the EU, USA, and India - part two
India's commitment to the promotion and development of artificial intelligence (AI) was recently highlighted in the Union Budget of 2024-25, announced by the Indian government in July 2024. The Budget allocated $65 million exclusively to the IndiaAI Mission, an ambitious $1.1 billion program announced earlier this year to focus on AI research and infrastructure in India. It has also been widely reported that the Ministry of Electronics and Information Technology (MeitY) is formulating a national AI policy, which is set to address a wide spectrum of issues, including the infringement of intellectual property rights and the development of responsible AI. As per reports, MeitY is also analyzing the AI frameworks of other jurisdictions to incorporate learnings from them into its national AI policy. Part one of this series focused on the regulatory approaches adopted by key jurisdictions such as the EU and the USA. In part two, Raghav Muthanna, Avimukt Dar, and Himangini Mishra, from INDUSLAW, explore the measures India can adopt, and the lessons it can take from such markets, in its journey toward the governance of AI systems.
Lessons for India from EU and US AI laws
The EU Artificial Intelligence Act (the EU AI Act) is a centralized law that governs all aspects of an AI system under a single legislation through a consolidated set of obligations. As technology advances, it is likely that a centralized law will not be able to adequately govern and regulate the different use cases of AI systems in any jurisdiction, because it is difficult to build enough flexibility into a single standalone legislation without threatening innovation and development across industries. In a country as socio-economically diverse as India especially, regulating all emerging use cases and their impact on the fundamental rights of Indian citizens solely through a centralized law will present several challenges. At the same time, such a centralized law may also ensure that regulatory intent is clearly established and that a comprehensive framework is laid down for the effective regulation of AI systems.
There are also several granular facets of the EU AI Act that India can adopt and implement. For instance, the EU AI Act obliges deployers of high-risk AI systems used in the provision of public services to undertake a fundamental rights assessment to determine their impact on the public. In doing so, it seeks to achieve the objective specified in the preamble of the EU AI Act: the development of trustworthy AI that protects fundamental rights. Deep fakes, another use case that may threaten fundamental rights, are also regulated by the EU AI Act, which requires the deployer of an AI system generating any form of deep fake to disclose that such content has been artificially generated or manipulated. By requiring certain AI systems to carry out the aforesaid assessment, by imposing transparency obligations such as the requirement to publicly disclose a detailed summary of the content used for training general-purpose AI systems, and by implementing detection and correction measures, the EU AI Act establishes a regulatory ecosystem that fosters innovation while ensuring it does not come at the cost of potential harm to the public.
As an all-encompassing legislation, the EU AI Act also seeks to protect the rights of citizens that are threatened by the use and deployment of AI systems. In India, there is a heavy emphasis on ensuring that ample regulatory safeguards are in place to protect the fundamental rights enshrined under the Indian Constitution. Accordingly, adopting ex-ante compliance measures, such as conducting a Fundamental Rights Impact Assessment (FRIA) before a potentially high-risk AI system is placed on the market, would go a long way in safeguarding the fundamental rights of Indian citizens. One of the significant measures adopted under the EU AI Act is its risk-based approach, which focuses on the impact of an AI system rather than regulating a particular technology. Adopting an approach that classifies AI systems by the degree of risk involved would offer ample clarity upfront to developers and operators of AI systems, and would help distinguish the measures each set of operators needs to adopt depending on the systems they operate and their use cases.
The EU AI Act is not restricted to imposing obligations and enforcing them; it also provides for regulated innovation through measures such as the establishment of AI regulatory sandboxes. These sandboxes provide a controlled environment facilitating the development, training, testing, and validation of innovative AI systems for a limited period before such systems are placed on the market, pursuant to a sandbox plan as provided under the EU AI Act. Small and medium-sized enterprises (SMEs), including start-ups, are given priority access to the sandboxes to mitigate any issues they might encounter while launching AI systems. That being said, AI providers participating in the sandbox process can still be held liable for any harm caused to third parties during the process. The Indian Government has in the recent past advocated a pro-innovation approach to promote the development of AI in India, and it has already successfully established a FinTech regulatory sandbox framework that enables emerging fintech entities to examine the regulatory viability of their products. Adopting a similar sandbox framework, modeled on the AI sandboxes provided under the EU AI Act and dedicated to governing AI innovations, could be a major step toward promoting AI in India.
In comparison to the EU, the US does not have dedicated federal legislation for the regulation of AI. It has, however, laid down guidelines, principles, and self-regulatory measures for the governance of AI. As noted in the first article of this series, several states and regulators in the US have framed legislation or issued advisories imposing mandatory obligations on AI systems. A self-regulatory approach ensures that, at all times, the development and use of AI systems accord with the principles necessary for responsible AI, such as safety, security, and transparency. However, the self-regulatory measures and guidelines prescribed in the US do not provide enforcement mechanisms or objective standards for the functioning and development of AI systems; this absence fails to ensure accountability from private entities and allows them to choose not to comply with these non-binding measures.
AI systems are also becoming increasingly capable of operating independently, and their decision-making processes will only grow more complex. Given that framing legislation for the regulation of cutting-edge technologies such as AI is a long-drawn and cumbersome process, India can, for the time being, adopt certain self-regulatory measures similar to those implemented in the US, driven by ethical principles, to govern the development and use of AI systems. Having identified the shortcomings in the existing frameworks of both the EU and the US, India is also better positioned to avoid the mistakes made in other jurisdictions, whether in providing for enforcement measures or in accounting for new risks that are not currently covered.
Further, the non-binding principles prescribed under the AI Bill of Rights and the voluntary commitments secured by the White House can serve as guidance for setting out ethical standards when framing regulations for the governance of AI systems. Additionally, the National Institute of Standards and Technology (NIST) provides a comprehensive procedure for addressing the risks of an AI system through the governance, mapping, measurement, and management of the risks associated with it. These measures and non-binding principles issued by NIST prescribe a risk-oriented framework that India can use as guidance in developing its own risk management framework for the development and deployment of AI systems.
Way forward
As briefly explained above, a central law presents several challenges in the regulation of an ever-evolving technology such as AI, as it might not be able to identify and regulate emerging use cases of AI systems across different sectors in a timely manner. On the other hand, a decentralized approach provides greater flexibility and a better supervision mechanism, as sector-specific regulators are always better placed to understand the needs of the fields they regulate and the use of AI in those particular sectors. Further, governments and regulators may consider adopting a self-regulatory approach that works in parallel with AI-specific regulations in order to strike a balance between regulation and innovation.
While we observe from the above analysis that currently there are diverse approaches for the regulation of AI, a common thread in the existing AI regulations/principles is the focus on identifying risks posed by different categories of AI systems, implementation of measures for mitigation of risk, following due diligence and other safeguards to assess risks, and adopting enforcement actions for holding those dealing in AI responsible for any risk accruing from the use of AI. This approach allows the regulators to ensure that obligations on any AI system are proportionate to the risk associated with such an AI system. This, in turn, enables regulators to provide protection against high-risk AI systems while promoting innovation by imposing minimal and necessary obligations on low-risk AI systems.
Recently, India hosted the Global IndiaAI Summit 2024, wherein the development of responsible AI by prescribing guidelines for 'ethical, transparent, and trustworthy AI technologies' was discussed. Additionally, in a recent statement, the Union Minister of MeitY clarified that the Indian Government intends to democratize access to AI platforms and services by adopting a digital public infrastructure (DPI) approach to ensure that a single service provider should not have a monopoly over AI-related services in the country.
That being said, MeitY officials have issued several statements recently indicating that it may adopt a strict governance framework. However, the Secretary of MeitY clarified that the Government is set to adopt a 'balanced approach in the regulation of AI systems, ensuring that both the interests of innovation and the protection of vital interests will be considered in the future.' He further stated that India can now frame regulations by learning from AI rules in various jurisdictions. It has also been reported that the new law will be a standalone legislation and will not prescribe any penal consequences for violations. That said, another report indicates that the national AI policy will not operate in a vacuum.
These varying statements on AI regulation make it intriguing to observe the Government's approach, and they raise the question of whether any proposed framework will be able to effectively balance the promotion of AI with the protection of stakeholders. Thus far, Indian regulators appear yet to recognize the severity of the issues associated with AI systems, including data processing and retention issues, purpose limitation concerns, and copyright issues, and have focused primarily on user harm and the rights of Indian citizens. While Indian regulators have adopted a wait-and-watch approach, several private entities, including established bodies and industry leaders, have recently formed a coalition called 'CoRE-AI' to address imminent issues facing the Indian AI market and to promote the development of responsible AI in India. CoRE-AI seeks to frame guidelines and contribute to the development of a robust framework for the regulation of AI in India. MeitY has welcomed this development and has stated that it looks forward to the forum's support and inputs on the regulation of responsible AI in India. Such initiatives by private entities are a step in the right direction, and the other initiatives mentioned above indicate that the Indian Government is serious about the governance of AI. It is hoped that the Indian Government will release a comprehensive AI regulatory framework at the earliest to address the ever-growing issues stemming from AI, and, just as importantly, that it will incorporate into that framework the key approaches to AI governance displayed globally by developed markets like the EU and the US, along with the learnings that flow from them.
Avimukt Dar Partner
[email protected]
Raghav Muthanna Partner
[email protected]
Himangini Mishra Associate
[email protected]
INDUSLAW, Bangalore