Apple has rejected a collaboration with Meta over privacy concerns, Bloomberg reports. According to sources, the two companies held brief discussions about AI cooperation in March, but the talks did not progress, and Apple has no plans to integrate Meta’s large language model, Llama, into iOS.
Prior to this news, The Wall Street Journal reported over the weekend that Apple and Meta were discussing integrating Llama with iOS 18. However, Bloomberg’s subsequent report directly stated that Apple had already dismissed such ideas.
The report indicated that Apple chose not to continue formal discussions with Meta, partly because Apple believed Meta’s privacy measures were not stringent enough. Sources revealed that Apple had been criticizing Meta’s technology for years, and integrating Meta’s Llama model into the iPhone would represent a significant shift.
Apple apparently considered OpenAI’s ChatGPT a better option. At its annual WWDC developer conference on June 11, Apple officially announced a partnership with OpenAI to integrate ChatGPT into the Apple ecosystem.
At the conference, Apple introduced a suite of AI features collectively branded “Apple Intelligence,” including notification summaries and emoji generation. With regard to chatbot technology, however, Bloomberg stated that Apple still lags behind its competitors and is seeking external partners.
Sources also revealed that Apple is in discussions with the AI startup Anthropic, considering their chatbot as one of the options for Apple Intelligence.
But does the ChatGPT deal avoid the same problem? Although Apple declined to work with Meta over privacy concerns, its announced partnership with OpenAI has drawn similar scrutiny.
For example, Elon Musk, CEO of Tesla and SpaceX, raised security concerns immediately after Apple announced the ChatGPT partnership, saying he would ban Apple devices to protect his companies’ security.
Musk criticized Apple for being unable to build its own AI yet somehow claiming that OpenAI would protect users’ security and privacy. “After Apple hands your data over to OpenAI, they have no idea what actually happens. They are selling your interests,” Musk wrote.
Musk even threatened that not only employees but also visitors to his companies would have to surrender their Apple devices at the door, to be stored in a Faraday cage. A Faraday cage is an enclosure that shields electromagnetic fields, blocking any wireless communication between the devices inside and the outside world and thereby keeping internal company information confidential.
Sources:
Bloomberg, MacRumors, Reuters