India Takes the First Shot at Regulating Artificial Intelligence



India has taken its first decisive step toward regulating artificial intelligence (AI) and curbing its misuse on the internet. In a landmark move, the Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021—marking the country’s most significant attempt yet to address the growing threat of deepfakes and AI-generated misinformation.

The new rules, released on Wednesday, require social media platforms to ensure that users declare any AI-generated or AI-altered content they upload. The move comes amid rising global concerns about the rapid spread of synthetic content that can convincingly mimic real people’s appearance, voices, and mannerisms.

Under the proposed framework, social media intermediaries will bear the responsibility of ensuring that AI-generated content is clearly labelled and identifiable. Companies will need to display visible watermarks or AI labels covering at least 10% of the duration or surface area of the content.

For instance, an AI-generated video must carry a watermark for at least 10% of its total runtime, while images and graphics must have visible AI labels occupying 10% of their visual space. Failure to comply could result in social media platforms losing their safe harbour protections under the IT Act, meaning they could be held legally liable for unflagged deepfake or AI-altered content.
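
To make the threshold concrete, here is a minimal Python sketch of how a platform might validate a declared label against the 10% rule at upload time. The data structure and field names are illustrative assumptions; the draft rules specify the threshold, not an implementation.

```python
from dataclasses import dataclass

COVERAGE_THRESHOLD = 0.10  # from the draft: at least 10% of runtime or area

@dataclass
class LabelDeclaration:
    """Hypothetical upload-time declaration of an AI label's extent."""
    content_type: str       # "video" or "image"
    total_extent: float     # runtime in seconds, or surface area in pixels
    labelled_extent: float  # seconds the watermark is visible, or label area

def is_label_compliant(decl: LabelDeclaration) -> bool:
    """Check a declared label against the 10% coverage rule."""
    if decl.total_extent <= 0:
        raise ValueError("total_extent must be positive")
    return decl.labelled_extent / decl.total_extent >= COVERAGE_THRESHOLD

# A 60-second AI video watermarked for 5 seconds fails (5/60 is about 8.3%);
# 6 seconds or more would pass.
print(is_label_compliant(LabelDeclaration("video", 60.0, 5.0)))  # False
```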

Users who repeatedly violate the rules may have their accounts flagged or restricted by the platform.

Union IT Minister Ashwini Vaishnaw said the amendments raise the “level of accountability for users, companies, and the government alike.” He emphasized that the growing volume of deepfake content online poses real dangers to individuals, communities, and national security.

“The enforcement of orders with social media intermediaries will now be carried out by officers at the rank of joint secretary and above in the central government, and DIG and above in police bodies,” Vaishnaw said at a press briefing.

A senior government official added that the Centre has already consulted top AI companies, who confirmed that metadata-based identification of AI-altered content is technically feasible. Accordingly, MeitY has drafted the rules to ensure that “AI content becomes part of social media platforms’ community guidelines,” mandating companies—not users—to proactively identify and report deepfakes.
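
As a rough illustration of what metadata-based identification could look like, the following sketch scans an image’s embedded metadata for provenance markers such as a C2PA manifest reference or a generator tag. The keys it matches are assumptions made for illustration; the draft rules do not prescribe a metadata schema, and production systems would verify full provenance manifests instead.

```python
from PIL import Image  # pip install Pillow

# Illustrative provenance hints; a real system would verify a signed
# C2PA manifest rather than string-match metadata keys.
AI_METADATA_HINTS = ("c2pa", "ai_generated", "synthetic", "generator")

def looks_ai_generated(path: str) -> bool:
    """Heuristically flag an image whose metadata hints at AI origin.

    Absence of metadata proves nothing: tags are easily stripped, which
    is one reason the draft rules also require visible labels.
    """
    with Image.open(path) as img:
        for key, value in img.info.items():  # format-specific metadata
            if any(h in f"{key} {value}".lower() for h in AI_METADATA_HINTS):
                return True
    return False

print(looks_ai_generated("upload.png"))  # True if a generator tag is present
```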

The decision follows a series of high-profile deepfake incidents involving public figures. On 19 September, the Delhi High Court issued an interim order protecting film producer Karan Johar from the use of AI-generated videos impersonating him for commercial gain. A similar order was passed on 10 September for actor Aishwarya Rai Bachchan, prohibiting misuse of her likeness through AI-generated content.

Globally, concerns about synthetic media have been rising. A Gartner survey published in September found that 62% of corporate cybersecurity executives said their organizations had faced at least one AI deepfake attack—often involving voice or video impersonation of senior executives.

With more than 850 million internet users in India and an election cycle approaching, experts say regulating AI-generated content has become a national priority.

Policy experts have welcomed the move as a proactive step in AI governance, though they caution against overreach.

“The proposed amendments are a significant step in India’s evolving approach to AI regulation,” said Dhruv Garg, founding partner at the India Governance and Policy Project (Igap). “By formally defining synthetically generated information and mandating labelling norms, the government is addressing one of the most complex challenges of the digital age—ensuring transparency and trust in online information.”

However, Garg also warned that poorly designed safeguards could unintentionally restrict legitimate artistic or satirical uses of AI-generated media. “Balancing authenticity and accountability with freedom of speech will be key to the success of this framework,” he said.

Cyber law expert N.S. Nappinai, senior counsel at the Supreme Court and founder of Cyber Saathi, echoed this sentiment. She noted that while the IT Rules already empower intermediaries to act on content takedown requests, the new amendments “amplify obligations and tighten enforcement.”

“Deepfakes have now reached a scale sufficient for the Centre to consider more robust and standalone AI laws,” she added. “Criminal laws that specifically address AI-related harms could serve as stronger deterrents than general provisions.”

The draft rules come shortly after the Parliamentary Standing Committee on Home Affairs released its 254th report, titled ‘Cyber Crime: Ramifications, Protection and Prevention’. The committee recommended a mandatory watermarking system for all digital media, including photos and videos, to track authenticity and prevent tampering.

It also suggested that MeitY set uniform technical standards for watermarking and provenance, with CERT-In (the Indian Computer Emergency Response Team) overseeing detection and issuing alerts for manipulated content.

These policy shifts align with global efforts to strengthen AI governance. The European Union’s AI Act and China’s Deep Synthesis Regulation also require clear labelling of AI-generated content, but India’s approach stands out for its specific quantitative threshold: the 10% coverage rule for duration and surface area makes enforcement more tangible.

Meanwhile, major tech companies have begun implementing their own safety measures. YouTube, owned by Google, has expanded its early-stage program to detect AI-generated likenesses, using internal algorithms and creator metadata to identify impersonation attempts.

However, concerns remain about Google’s new image-generation model, Gemini 2.5 Flash Image, known by the codename “Nano Banana”, which can generate highly realistic images of people, raising fears about misuse for deepfakes or misinformation campaigns.

Industry observers note that India’s draft rules could push platforms like YouTube, Meta, and X (formerly Twitter) to align their moderation systems more closely with government-mandated transparency standards.

Public feedback on the draft amendments is open until 6 November 2025, after which MeitY will finalize the provisions. Once enacted, the rules will require platforms to integrate watermarking, metadata tagging, and user-declaration systems across all content upload workflows.
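
How those three elements might compose inside an upload pipeline is sketched below. The stub checks and the gate’s outcomes are assumptions made to show the flow, not a design the draft rules prescribe.

```python
# Stubs standing in for the metadata and label checks sketched earlier.
def detect_ai_metadata(content: dict) -> bool:
    return content.get("metadata_ai_flag", False)

def has_compliant_label(content: dict) -> bool:
    return content.get("label_coverage", 0.0) >= 0.10

def handle_upload(content: dict, user_declared_ai: bool) -> str:
    """Illustrative gate combining user declaration, metadata tagging,
    and the visible-label check from the draft rules."""
    if not (user_declared_ai or detect_ai_metadata(content)):
        return "publish"                  # no sign of synthetic content
    if has_compliant_label(content):
        return "publish_with_ai_label"    # compliant synthetic content
    return "hold_for_labelling"           # flag, restrict, or auto-label

print(handle_upload({"metadata_ai_flag": True, "label_coverage": 0.12}, False))
# -> publish_with_ai_label
```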

Experts say this is just the beginning of India’s journey toward comprehensive AI governance. Future legislation could address AI model accountability, data security, ethical use, and AI-assisted misinformation.

As India moves to position itself as a global AI innovation hub, it is also signalling that innovation must coexist with transparency, safety, and accountability.

India’s proposed AI content labelling rules mark a pivotal moment in the evolution of digital governance. By defining synthetic media and enforcing measurable labelling norms, the government is setting a precedent that blends technological realism with public accountability.

The challenge ahead lies in implementation—ensuring that the law deters malicious actors without silencing creativity or legitimate innovation. If executed well, these rules could make India a global leader in building a responsible AI ecosystem that protects both individual rights and national interests.



By: vijAI Robotics Desk