1 minute read
June 25, 2023 10:02 p.m. IST
Shraddha Goled
IBM is developing a policy to regulate its employees’ use of third-party generative AI tools such as OpenAI’s ChatGPT and Google’s Bard. The company is evaluating the segment and the accuracy of such tools, since they can draw on untrusted sources, said Gaurav Sharma, vice president at IBM India Software Labs. IBM is not the first company to consider regulating the use of ChatGPT: Samsung Electronics, Amazon, Apple, and global banks including Goldman Sachs, JP Morgan, and Wells Fargo have restricted internal use of ChatGPT over data-security concerns.
IBM is in the process of drafting a policy that will define how third-party generative artificial intelligence (AI) tools like OpenAI’s ChatGPT and Google’s Bard are used by its employees, three senior executives at the tech giant said at its AI Innovation Day event in Bengaluru on June 20.
Gaurav Sharma, vice president of IBM India Software Labs, spoke about the rise of generative AI and how such tools are being used for internal processes. He said the company is evaluating the segment and its accuracy “because these tools are based on untrusted sources that cannot be used,” and added that a guideline for how generative AI applications such as ChatGPT may be used is still being worked out.
Vishal Chahal, director of automation at IBM India Software Labs, also endorsed the development of an internal policy on the use of such tools.
The policy is still being drafted, and no outright ban has been imposed yet. “There was outreach about not putting our code into ChatGPT, but we didn’t ban it,” said Shweta Shandilya, director at IBM India Software Labs (Kochi).
“As with any new technology, consideration of the use of generative AI tools (beyond ChatGPT) is an ongoing process,” an IBM spokesperson said in response to a request for comment on the company’s internal policy on ChatGPT.
IBM is not the first company to consider regulating the use of ChatGPT. On May 2, Bloomberg reported that South Korean company Samsung Electronics had decided to ban employees from using ChatGPT after suspecting that confidential internal data had been leaked. On Jan. 25, Insider reported that Amazon had sent a similar internal email urging employees not to use ChatGPT due to concerns about sharing sensitive internal data with OpenAI. On May 18, The Wall Street Journal reported that Apple had taken a similar path.
Global banks Goldman Sachs, JP Morgan and Wells Fargo are also understood to have restricted internal use of ChatGPT over concerns that sensitive client and customer data could leak into OpenAI’s testing environment.
IBM’s caution also follows a report released June 20 by Singapore-based cybersecurity firm Group-IB, which said credentials for over 100,000 ChatGPT accounts had been stolen and sold on dark-web marketplaces.
However, on June 22, OpenAI stated that the stolen data was the result of “off-the-shelf malware on devices and not an OpenAI breach.”
Jaya Kishore Reddy, co-founder and chief technology officer of Mumbai-based AI chatbot developer Yellow.ai, explained why such internal bans exist: “There is a strong possibility that generative AI tools can create misinformation. There is an accuracy problem, and the information generated can even be misinterpreted. Additionally, the data fed into these platforms is used to train and fine-tune responses, which can result in a company’s confidential information being leaked.”
On Feb. 27, Mint reported that companies are reluctant to use tools like ChatGPT. Concerns include hallucinated data, potentially inaccurate and misleading information, and a lack of safeguards around how sensitive company data is accessed or deleted.
Bern Elliot, a vice president and analyst at Gartner, said at the time: “It’s important to understand that ChatGPT is built without true corporate privacy governance, leaving all the data it collects and shares without any protection. This would be a concern for organizations such as media or even pharmaceutical companies, because using GPT models in their chatbots does not offer them any privacy protection. A future version of ChatGPT, offered by Microsoft via its Azure platform for companies to integrate, could be a safer choice in the near future.”
Since then, OpenAI has introduced better privacy controls. On April 25, the company announced in a blog post that users can turn off chat history; conversations started with history disabled are not used to train its models and are deleted from its servers after 30 days. It also confirmed that a “ChatGPT Business” subscription is in development, which would give businesses greater control over their data.
Yellow.ai’s Reddy added that companies are currently opting for enterprise-class application programming interfaces (APIs) from companies like OpenAI that ensure data security, or developing their own internal models.
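As a rough illustration of the API route Reddy describes, the sketch below sends a prompt through OpenAI’s API rather than the consumer ChatGPT app; under the data-usage policy OpenAI announced in March 2023, data submitted via the API is not used for model training by default. This is a minimal sketch, not IBM’s or Yellow.ai’s implementation: it assumes the pre-1.0 “openai” Python package that was current when this article was published, and an OPENAI_API_KEY environment variable.

```python
import os

import openai  # pip install "openai<1.0" (the interface current in mid-2023)

# Read the key from the environment so credentials never sit in source code.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Unlike pasting text into the ChatGPT web app, requests sent through the
# API are not used for model training by default under OpenAI's policy.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize these release notes for a customer email."},
    ],
)

print(response["choices"][0]["message"]["content"])
```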