AI platforms require permission, can be denied over misinformation or bias: Government

NEW DELHI: With general elections looming, and following a controversy over unsubstantiated comments made by Google's AI platform Gemini about PM Modi, the government on Saturday said it has issued an advisory for artificial intelligence-led internet companies, requiring them to label any unverified information as potentially false or error-prone.

Any AI-driven public information platform will require approval before being allowed in India, the government said, while warning that it will not hesitate to deny approval if there is a risk of misinformation or bias.




The announcement comes almost two and a half months after the government issued a warning on deepfakes, following several incidents in which synthetically produced content surfaced on social media and other internet channels. The latest advisory states that AI-driven platforms should not publish unlawful, misleading or biased content that has the potential to jeopardize the integrity of the nation or the electoral process.

IT and Electronics Minister Rajeev Chandrasekhar said that AI platforms like OpenAI and Google's Gemini need to make disclosures about the nature of their responses to the government and to the digital citizens of India, clearly stating that the content may be false, error-prone or unlawful because the model is still in the testing phase.


“If you have an untested platform and believe that the platform is still in the early stages of training and therefore unreliable, you need to do three things. First, you have to tell the government that I am deploying it. Second, you must inform consumers, through a disclaimer, that I am a platform under testing. Thirdly, you must expressly communicate this to the consumer using the platform and obtain their consent to use it. Take Google Gemini as an example. It needed to be communicated to the government before launch that it is a somewhat flawed platform.”

Asked whether the government has the power to refuse to allow the launch of such a platform in the country if it finds it unreliable, Chandrasekhar told TOI: “We can reject it if we find that there is higher risk. It is very clear.”


He said companies often apologized after discovering that their platforms provided false and unreliable information or biased results. “That is not a defence a company can take, any more than it could if it made an unsafe car or a medication with side effects.”

The minister said the new advisory would focus on eliminating bias and discrimination on the public internet. “The advisory states that you cannot have models that output illegal content and then claim that the model is untested and unreliable. If it is unreliable and untested, tell the consumer and the government in advance.”

The advisory clearly warns against hosting unsubstantiated and unreliable content. “All intermediaries or platforms must ensure that the use of artificial intelligence/LLM/generative AI models, software or algorithms on or through their computing resources does not permit their users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content… Failure to comply with the provisions would result in penal consequences.”
