Homo Digitalis 03 | The many questions about Indian Government’s new AI Advisory

Hello,

Welcome to the third edition of my newsletter, HOMO DIGITALIS. AI regulation has become increasingly central to public policy in the last couple of years, in part due to the availability of generative AI and the doomsday predictions it inspired, and in part due to the first comprehensive regulatory oversight mechanism emerging in the form of the EU’s AI Act. The profusion of elections this year has brought these discussions even more front and centre, as generative AI promises to act as a cheap supply chain for deep fakes and synthetic content. Despite these obvious and immediate threats, there is much sense in moving slowly and with consideration. The marketplace of ideas is, as always, not lacking in poor proposals.

In Focus: Indian Government’s strange, new AI Advisory

In a bizarre and rushed foray into AI regulation, the Indian government issued an advisory on March 1, 2024, asking platforms to seek the “explicit permission” of the Ministry of Electronics and Information Technology (MeitY) before deploying any “unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s)” for “users on the Indian Internet.” Additionally, it asks intermediaries or platforms to ensure that their systems do not permit any bias or discrimination or threaten the integrity of the electoral process and label all synthetically created media and text with unique identifiers or metadata so that it is easily identifiable. Notably, this advisory follows a recent online exchange where the Minister of State for IT, Rajeev Chandrasekhar, called Google Gemini’s response to the question, “Is Modi a fascist?” a direct violation of intermediary liability regulations and criminal law provisions.

The advisory should be seen in light of an earlier one issued in November 2023. That earlier advisory asked significant social media intermediaries, including platforms such as Facebook, Instagram, and YouTube, to take down deep fake content within 24 hours of notification from an aggrieved person. Conditional on compliance with such takedown requests, the advisory made safe harbor available to platforms as Internet intermediaries under Section 79(1) of the Information Technology Act, 2000.

Both advisories appear to have been issued under Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021), which requires platforms to make reasonable efforts, including informing users of their rules, to ensure that users do not host content that spreads misinformation or impersonates another person. The November 2023 advisory was followed by another advisory in December 2023, which asked all internet intermediaries to comply with the IT Rules, 2021, in particular the provisions of Rule 3(1)(b). The new advisory on AI makes a direct reference to the December advisory.

The immediate response to the AI advisory was sharp criticism from various quarters and questions regarding its legal validity. Several startups voiced concerns that it “kills startups trying to build something in the field and only allows giant corporations who can afford additional resources for testing, and government approval.” In a tweet on March 4th, Chandrasekhar clarified that the advisory applied only to “significant platforms” and would not apply to startups. In a subsequent tweet on the same day, Chandrasekhar provided further clarification. He invoked potential liabilities that Internet intermediaries face under India’s criminal laws and the loss of safe harbor protections when unlawful content is involved, and stated that platforms deploying “lab level /undertested AI platforms onto public Internet and that cause[s] harm or enable unlawful content” could protect themselves from such liabilities by seeking prior permission from the government.

The nature of this ill-conceived and poorly drafted advisory is reminiscent of the short-lived draft of the National Encryption Policy, which was released in 2015 and withdrawn within a month, with the government blaming its poor drafting on a junior officer. However, unlike then, the government’s response has been bullish rather than contrite. In fact, the second clarificatory tweet by Chandrasekhar sounded annoyed, blaming “noise and confusion being created by those who [should] know better,” complete with a man-shrugging emoji.

Despite these two clarifications, the scope and effect of this advisory remain unclear. The first clarification tries to address questions of scope by limiting the application to ‘significant platforms.’ ‘Platform’ is not a defined term anywhere in the Information Technology Act or the IT Rules, 2021. We can only assume that the tweet was referring to a ‘significant social media intermediary,’ which is defined under the IT Rules, 2021. Does this mean that all intermediaries other than significant social media intermediaries are excluded from the application of the advisory? It is not entirely clear.

The next issue is the legal validity of the advisory itself. The text of the advisory does not refer to any laws or regulations from which it may draw the enabling power to demand prior approval from the government. Advisories, unlike notifications, have no statutory force. They are merely clarifications that inform the public about provisions of the law that already exist. The Minister’s second clarifying tweet seems to imply that the advisory simply repeats existing penal provisions and obligations applicable to intermediaries, and all it does operationally is offer a way for platforms to protect themselves through prior approval. If that is the case, it is not clear what legal provisions enable the government to offer additional protections to platforms supposedly in violation of the law. And if that is indeed the case, why are startups and other smaller companies that fall outside the scope of ‘significant social media intermediaries’ being denied this insurance?

Finally, the advisory uses general and vague terms such as “undertested” and “unreliable” AI, which are not defined in any law or regulation, nor have any accepted meaning in scientific literature. This makes meaningful compliance with the advisory almost impossible.

The legality of the IT Rules, 2021, is also uncertain. At the time of writing, 17 petitions challenging the constitutional validity of the rules were pending before different courts in India. Given these pending legal challenges, it would be prudent for the government to be cautious in implementing the rules’ provisions rather than to seek to extend their scope to newer domains such as AI.

A more concise version of this commentary was published by Tech Policy Press here.

Recommended Readings

If you would like to read a detailed review of the IT Rules, 2021, this note by Gurshabad Grover et al., which I edited, is a comprehensive analysis of their constitutional validity. For those tracking AI developments in India, the OECD’s tracker looks exhaustive. My former colleagues at the Centre for Internet and Society, Shruti Trikanad and Torsha Sarkar, along with Anoushka Soni, have done an excellent job of mapping the Indian government’s various responses to disinformation in India. These provide excellent background to the advisories discussed in this issue.

From my non-work reading, I would highly recommend Alejandro Zambra’s anthology of essays called Not to Read. I have become a recent fan of the relatively new publisher, Fitzcarraldo Editions. Their curation has a distinct point of view. I have long been enamoured by the genre of books about reading, but reading companions do not always deliver. In Not to Read, Zambra treats his accounts of reading and writing almost as a confessional, interspersing inspiration, analysis, memories and eulogies. The book also exposed me to many new literary names from Latin America and a list of books from the region I now want to read.

That is all for this issue. Until next time.
