India Implements Regulatory Measures for AI Development and Deployment


In a significant move to regulate the burgeoning field of artificial intelligence (AI) within its borders, India’s Ministry of Electronics and Information Technology (MeitY) has issued an advisory mandating explicit government permission before any AI technology still under development can be released to the public. The directive also requires developers to label any potential unreliability in the AI’s output before deployment, not merely to seek approval.

The advisory further introduces a “consent popup” feature to inform users about possible defects or errors in AI-generated content, and it requires that deepfakes be labelled with unique metadata or identifiers to curb misuse. Additionally, it mandates that all AI models, including large language models (LLMs), adhere to standards that prevent bias, discrimination, or any threat to the integrity of the electoral process.

This move has sparked debate within the tech community, with some industry figures criticizing the regulations as overly stringent and potentially stifling innovation. Developers are given a 15-day compliance window from the advisory’s issuance, after which they may need to demonstrate their products to government officials or undergo stress testing to gain approval.

While the advisory is currently non-binding, it sets a clear expectation from the government and indicates the direction future AI regulation in India might take. IT Minister Rajeev Chandrasekhar emphasized the government’s intention to eventually encode these guidelines into legislation, underscoring the responsibility of AI platforms for their outputs, regardless of their development stage.


What does the new advisory from India’s MeitY entail for AI developers? 

AI developers must obtain government permission before releasing any AI technology to the public, label potential unreliability in their AI’s output, and adhere to guidelines preventing bias and discrimination.

What is the purpose of the “consent popup” feature?

The “consent popup” is designed to inform users about potential defects or errors in AI-generated content, enhancing transparency and user awareness.

How have industry figures reacted to the new regulations? 

Some industry figures have criticized the regulations as overly restrictive, fearing they could hinder innovation and the development of AI technologies in India.

What are the implications of the advisory for the future of AI regulation in India? 

The advisory signals the government’s intent to closely regulate AI development and deployment, with plans to incorporate these guidelines into formal legislation in the future.


India’s recent advisory on AI development and deployment represents a cautious approach to managing the rapid advancement of AI technologies within the country. By requiring government permission and adherence to specific guidelines, the Indian government aims to mitigate risks associated with AI, such as bias, discrimination, and misinformation. While the move has been met with criticism from some quarters of the tech industry, it underscores the government’s commitment to ensuring that AI development aligns with broader societal values and security concerns. As the advisory evolves into formal legislation, it will be crucial for developers and policymakers alike to balance innovation with ethical considerations and public welfare.
