Apple’s painfully slow launch of the new Siri has become such a drawn-out saga that it seems to have all the makings of a movie – even though we probably won’t see the launch on Apple TV.
The announcement earlier this year that it would be powered by Google’s Gemini models finally gave us reason to be optimistic, and the latest Apple AI news could be an equally important development…
Even the most supportive Apple commentators have run out of patience after the company announced new Siri features in 2024 as if they were imminent, then had to admit that really wasn't the case. It's now 2026 and we're still waiting.
Siri powered by Gemini
The first good news came earlier this year, when Apple confirmed that it was partnering with Google to use Gemini AI models to power future Siri features.
Google's Gemini model delivers exactly the kind of personalized intelligence Apple has long promised. The big difference is that Google didn't just offer slick promotional videos: instead, the company actually launched a beta version of its Personal Intelligence feature.
Personal Intelligence can retrieve specific details from text, photos or videos in your Google apps to personalize Gemini responses. This includes Google Workspace (Gmail, Calendar, Drive, etc.), Google Photos, your YouTube watch history, and all the different Google search services you’ve used (Search, Shopping, News, Maps, Google Flights, and Hotels). The Apple version will of course pull information from Apple apps like Mail, Calendar, Photos and Notes.
Moving from Apple’s vaporware to demonstrable services was a huge step on the path to actually delivering the new Siri.
A choice of third-party AI models
A Bloomberg report last month indicated that Siri would be able to integrate with other third-party AI chatbot applications, and a follow-up yesterday provided more details.
This will allow iPhone users to choose from several third-party models from companies like Google and Anthropic, including the ability to set custom voices in Siri based on which external model responds (…)
For example, Google and Anthropic could add support for this extension system to the Gemini and Claude apps, respectively. Then, users could choose to use these models to power features like Siri, writing tools, etc.
This is excellent news for three reasons.
First, and most obviously, we're no longer dependent on the progress of Apple Intelligence. The Cupertino company may or may not eventually power all of its AI features with in-house models, but either way, we don't have to wait for that to happen.
Second, many of us have our own personal preferences for AI chatbots and have built a history with them, which serves as context for further interactions. My current favorite AI app, for example, is Claude, and I was impressed enough with it to recently convert my monthly subscription to an annual subscription.
Of course, it suffers from all the generic AI chatbot problems that half a dozen commenters will furiously point out after reading the paragraph above. I always verify claimed facts by checking the source links it provides, but I find it very useful for things like brainstorming ideas.
Third, competition among AI chatbots is one of the main drivers of their improvement. Each time one of the players introduces a new model, it pushes the others to match, and ideally exceed, its new capabilities. Having the flexibility to choose our model at will from competing providers is ideal.
Yes, if Apple ever releases its own AI models that are competitive with those offered by the current major players, I will probably choose to use them. The company should be able to integrate more deeply into its own apps and services, and I trust its privacy promises more than those of other players. But I will be very happy to have the choice.
What is your point of view? Please share your thoughts in the comments.
FTC: We use automatic, revenue-generating affiliate links. More.