Google’s parent company, Alphabet Inc., has advised its employees to exercise caution when interacting with artificial intelligence (AI) chatbots, including Google’s own Bard. The advisory follows concerns over potential data leaks and unauthorized dissemination of information: employees are asked not to enter confidential material into chatbots, and engineers are cautioned against directly using AI-generated computer code.
Even as Google continues to develop and market Bard, its counterpart to OpenAI’s ChatGPT, the tech behemoth urges caution. The stance reflects Alphabet’s wider data-security concerns as chatbots permeate more of digital communication, and the risk of data leakage reaches well beyond Google’s own ecosystem to the broader tech industry.
Companies such as Samsung, Amazon, Apple, and Deutsche Bank have imposed similar restrictions on AI chatbot use to prevent leaks of sensitive internal data. Notably, Samsung confirmed last month that internal data had leaked as a consequence of employees using ChatGPT.
As AI-powered chatbots like Bard and ChatGPT become more sophisticated, the conversations users submit may be reviewed by humans or folded into future training data, and such models have been shown to reproduce text they absorbed during training. This makes them potential avenues for data leakage, a particular concern when sensitive or confidential information is involved.
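One practical mitigation, consistent with the guidance above, is to screen prompts for obvious secrets before they ever leave the corporate network. The following is a minimal sketch in Python; the patterns and the screen_prompt helper are illustrative assumptions for demonstration, not any vendor’s actual tooling.

    import re

    # Hypothetical patterns for obvious secrets; a real deployment would draw on
    # organization-specific rules (DLP policies, internal hostnames, codenames).
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS-style access key IDs
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
        re.compile(r"\b[\w.]+@corp\.example\.com\b"),       # placeholder internal email domain
    ]

    def screen_prompt(prompt: str) -> str:
        """Raise if the prompt appears to contain confidential material;
        otherwise return it unchanged, so the check can sit in front of
        any chatbot client as a last line of defense."""
        for pattern in SECRET_PATTERNS:
            if pattern.search(prompt):
                raise ValueError(f"Prompt blocked: matches {pattern.pattern!r}")
        return prompt

    if __name__ == "__main__":
        screen_prompt("Summarize this press release for me.")  # passes
        try:
            screen_prompt("Debug this: key=AKIAABCDEFGHIJKLMNOP")
        except ValueError as err:
            print(err)  # blocked before reaching any external service

A filter like this cannot catch every secret, which is why the blanket advice from these companies remains simply not to paste confidential material into a chatbot at all.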
Google has been upfront about the limitations of its technology and maintains a policy of transparency. Even so, European privacy regulators have raised concerns over the chatbot’s handling of personal data, prompting a delay in Bard’s launch in the European Union.