According to a report from The Wall Street Journal, Apple has restricted some employees from using ChatGPT and other external AI tools.
The internal document cited in the report reflects Apple's concern that employees using external tools "may disclose sensitive information." The company has also reportedly instructed employees not to use Microsoft's GitHub Copilot.
ChatGPT is a chatbot built on a large language model developed by OpenAI with backing from Microsoft. When users interact with such models, their inputs are sent back to the developers for continued improvement of the models, which could inadvertently expose proprietary or confidential information.
In March, OpenAI temporarily took ChatGPT offline because of a bug that allowed some users to see the titles of other users' conversations. OpenAI CEO Sam Altman has previously acknowledged the technology's potential risks. Companies such as JPMorgan Chase and Verizon Communications have already banned employees from using ChatGPT.
Apple, meanwhile, is developing its own large language models. Its AI efforts are led by John Giannandrea, whom the company hired away from Google in 2018, and Apple has also acquired several AI startups.
Apple's app review team has asked developers of AI-powered apps either to raise their apps' age rating to 17 and above (previously 4 and above) or to implement content filtering. Apple recently notified the developer Blix that it had delayed approval of an update adding ChatGPT support to its email app, citing concerns that the new AI feature could display inappropriate content. The update will be approved once Blix assures Apple that content filtering is in place for the ChatGPT functionality.
Apple was an early mover in consumer AI, introducing its voice assistant Siri in 2011, but in the years since it has fallen behind rivals such as Amazon's Alexa.