MeitY Releases Updated Advisory for Intermediaries on AI Model Usage

March 31, 2024


On March 15, 2024, the Ministry of Electronics and Information Technology (MeitY) issued a revised advisory to intermediaries under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”) concerning the use of Artificial Intelligence (AI) on their platforms.

The advisory, issued in supersession of the previous directive dated March 1, 2024, outlines crucial points regarding the use of AI by intermediaries and platforms. Summarised below are the key provisions of the advisory.

  1. Prevention of Unlawful Content: Intermediaries and platforms should ensure that the AI models, software, or algorithms they employ do not enable users to host, display, upload, modify, publish, transmit, store, update, or share any content that is unlawful under Rule 3(1)(b) of the IT Rules or violates any provision of the IT Act, 2000.
  2. Elimination of Bias and Protection of Electoral Integrity: These entities should ensure that their computational resources, including AI models, do not facilitate bias, discrimination, or threats to the integrity of electoral processes.
  3. Labeling of Under-Tested or Unreliable AI Outputs: AI technologies that are under-tested or considered unreliable must be clearly labeled to indicate the potential fallibility or unreliability of their outputs. A "consent popup" or an equivalent mechanism should be used to inform users explicitly about these potential issues.
  4. Informing Users of Legal Consequences: Platforms and intermediaries are required to inform users, through their terms of service and user agreements, about the legal ramifications of dealing with unlawful information. These include the potential for access to be disabled, accounts to be suspended or terminated, and penalties under applicable laws.
  5. Regulation of Synthetic Content and Deepfakes: Where a platform's resources enable the synthetic creation or modification of text, audio, or visual content that could serve as misinformation or a deepfake, such content must be labeled or embedded with unique metadata or an identifier. This identifier should allow the content's creation or modification to be traced back to the platform or the individual user, ensuring accountability and traceability.

[Sources: (a) Notification dated March 15, 2024; (b) Notification dated March 1, 2024.]