The New Siri: How Apple and Google Built the AI Assistant We Actually Wanted
Let me paint a picture you already know. You are driving, hands on the wheel, and you say: "Hey Siri, text Sarah that I'm running ten minutes late and then start navigating to the restaurant." Siri replies with something about not being able to help with that, or reads you a Wikipedia article about lateness, or just gives you the spinning orb of silence. You sigh, pull over, and do it yourself.
That Siri is gone.
With iOS 26.4 shipping in late March 2026, Apple has effectively gutted the old assistant and replaced it with something that actually understands what you want, remembers what you said thirty seconds ago, and can chain together up to ten actions from a single request. The secret ingredient? Google's 1.2-trillion-parameter Gemini model running the reasoning layer underneath.
If that sounds like an unlikely partnership, you are not alone. But once you understand how Apple structured the deal -- Google thinks, Apple protects -- the whole thing makes a surprising amount of sense. And after spending time with the developer betas, I can tell you: this is the first time Siri has felt like a genuinely useful assistant rather than a parlor trick that occasionally works.
Here is everything that changed, why it matters, and what you should actually expect when the update hits your phone.
What Was Actually Wrong With Siri
Before we get into the new stuff, it is worth understanding why the old Siri felt so stuck. It was not just a matter of "Apple is behind on AI." The problem was architectural.
Old Siri was built on a domain-based intent system. Every request got classified into a category -- music, messages, weather, timers -- and routed to a handler built specifically for that category. If your request did not fit neatly into one of those buckets, Siri either punted to a web search or gave you the dreaded "I can't help with that."
This meant three things in practice:
- No chaining. You could not say "do X and then Y." Each request was isolated.
- No memory. Ask a follow-up question and Siri had already forgotten the first one.
- No screen awareness. Siri had no idea what you were looking at. It lived in its own bubble.
Apple tried to patch this over the years -- SiriKit, App Intents, on-device processing -- but the foundation was the problem. You cannot bolt conversational intelligence onto a lookup table.
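To make that concrete, here is a minimal sketch of what a domain-based router looks like -- my own simplified Swift illustration, not Apple's actual code:

```swift
import Foundation

// A sketch of a domain-based intent router (hypothetical, not Apple's code).
// One request maps to exactly one handler; nothing carries over between calls.

enum Domain {
    case music, messages, weather, timers
}

struct OldAssistant {
    // Classify the utterance into a single domain, or fail.
    func classify(_ utterance: String) -> Domain? {
        if utterance.contains("play") { return .music }
        if utterance.contains("text") { return .messages }
        if utterance.contains("weather") { return .weather }
        if utterance.contains("timer") { return .timers }
        return nil // the dreaded "I can't help with that"
    }

    func handle(_ utterance: String) -> String {
        guard let domain = classify(utterance) else {
            return "I can't help with that."
        }
        // Each branch is a dead end: no chaining, no memory of prior
        // turns, no awareness of anything outside its own bucket.
        switch domain {
        case .music:    return "Playing music."
        case .messages: return "Sending message."
        case .weather:  return "Here's the weather."
        case .timers:   return "Timer set."
        }
    }
}
```

Every request enters and exits through one isolated branch. There is nowhere for a second step, or a remembered answer, to live.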
So Apple did something drastic: they tore out the lookup table entirely.
The Rebuild: Gemini Under the Hood
Here is where it gets interesting. Rather than training their own large language model from scratch -- which would have taken years and an enormous amount of compute -- Apple struck a deal with Google to license the Gemini model. Specifically, they are running a version of Gemini with 1.2 trillion parameters that has been fine-tuned for Apple's specific use cases.
But this is not just "Siri is now a ChatGPT clone." The architecture is more nuanced than that.
How the Layers Work
Think of the new Siri as a three-layer system:
The Apple Layer (front end). This is everything you see and touch. The Siri UI, the voice recognition, the animation, the on-device processing for simple queries (timers, quick math, device settings). Apple built this and controls it entirely.
The Gemini Layer (reasoning). When your request requires actual intelligence -- understanding context, planning multi-step actions, interpreting ambiguous language -- it gets sent to the Gemini model. This is where the "thinking" happens.
The Privacy Layer (enforcement). Apple's Private Cloud Compute infrastructure sits between you and Google's model. Your data is encrypted, processed in secure enclaves, and never stored or used for training. Google never sees your raw data. More on this in a moment.
The key insight is that Apple did not hand Siri over to Google. They hired Google's brain but kept it in Apple's house, under Apple's rules. Google provides the reasoning capability. Apple controls the data, the UI, the privacy enforcement, and the final decision on what actions to take.
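As a rough mental model -- again my own sketch with invented names, not Apple's implementation -- the routing decision between the layers looks something like this:

```swift
// A conceptual sketch of the three-layer split. Every type and
// threshold here is invented for illustration.

enum Route {
    case onDevice     // Apple layer: timers, settings, quick math
    case privateCloud // privacy layer wrapping the Gemini reasoning layer
}

struct Request {
    let utterance: String
    let needsMultiStepPlanning: Bool
    let needsScreenContext: Bool
}

func route(_ request: Request) -> Route {
    // Simple, self-contained requests never leave the device.
    if !request.needsMultiStepPlanning && !request.needsScreenContext {
        return .onDevice
    }
    // Everything else goes through Apple's Private Cloud Compute, which
    // strips identity before the Gemini layer sees the request and
    // filters the response on the way back.
    return .privateCloud
}
```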
What 1.2 Trillion Parameters Actually Means for You
You do not need to care about the number itself. What matters is what it enables:
- Genuine natural language understanding. You can speak to Siri the way you speak to a person. Rambling, incomplete sentences, context switches mid-thought -- it handles all of it.
- Multi-step planning. The model can break a complex request into sequential steps, figure out the right order, and execute them.
- Context retention. It remembers what you said earlier in the conversation and builds on it.
- On-screen awareness. Combined with Apple's vision pipeline, Gemini can reason about what is currently displayed on your screen.
In short: Siri went from a command processor to an actual assistant.
The Privacy Split Explained
I know what you are thinking. "My Siri data is going to Google now?" The answer is more complicated -- and more reassuring -- than a simple yes or no.
Apple built a system called Private Cloud Compute (PCC) specifically for this kind of scenario. Here is how it works in practice:
Your request starts on-device. Voice recognition happens locally on your iPhone or iPad. Simple requests (setting a timer, toggling a setting) are handled entirely on-device and never leave your phone.
Complex requests go to Apple's secure servers. When the request needs Gemini-level reasoning, it is sent to Apple's own cloud infrastructure -- not to Google directly.
Apple's servers talk to the model. The request is processed inside a secure enclave -- a locked-down environment where the data exists only for the duration of the computation. It is not logged, not stored, not used for model training.
The response comes back through Apple. Gemini's output goes back through Apple's privacy layer before reaching your device. Apple can filter, modify, or block responses.
Google never sees your identity. The requests that reach the Gemini model are stripped of personally identifiable information. Google sees "a user wants to schedule a meeting at 3 PM" -- not "John Smith in San Francisco wants to schedule a meeting."
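If you prefer to see those guarantees as code, here is a hypothetical pipeline that mirrors the steps above. Every type and function in it is invented for illustration; no real Apple or Google API appears:

```swift
import Foundation

// Illustrative only: a hypothetical PCC-style pipeline mirroring the
// steps above. Nothing here is a real Apple or Google API.

struct CloudResponse { let text: String }

func handleComplexRequest(_ utterance: String, userID: String) -> CloudResponse {
    // Step 1 (voice recognition) already happened on-device; we start
    // with text. Step 2: strip personally identifiable information
    // before anything leaves Apple's infrastructure.
    let anonymized = utterance.replacingOccurrences(of: userID, with: "<user>")

    // Step 3: process inside a secure enclave, where the data exists
    // only for the duration of the computation. (Simulated here as a
    // plain function call; nothing is logged or stored.)
    let modelOutput = runInSecureEnclave(anonymized)

    // Step 4: the response passes back through Apple's privacy layer,
    // which can filter, modify, or block it before it reaches you.
    return CloudResponse(text: applyPrivacyFilter(to: modelOutput))
}

func runInSecureEnclave(_ input: String) -> String {
    // Placeholder for the Gemini reasoning step.
    "response to: \(input)"
}

func applyPrivacyFilter(to output: String) -> String {
    // Placeholder for Apple's outbound filtering.
    output
}
```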
Pro tip: If you want to verify this yourself, Apple has published the PCC architecture for independent security audits. Third-party researchers have already confirmed that the cryptographic guarantees hold up. This is not just marketing language.
Is it as private as everything running on-device? No. But it is dramatically more private than any other cloud-based AI assistant on the market, and it is a reasonable trade-off for the massive jump in capability.
What the New Siri Can Actually Do: 6 Real-World Scenarios
Enough architecture talk. Let me walk you through scenarios that show what this actually looks like in daily use.
1. The Morning Briefing That Actually Briefs
Old Siri: "Here's what I found on the web for 'morning briefing.'"
New Siri: You say, "Give me a rundown of my day." Siri responds with a spoken summary that covers your calendar (three meetings, one with a conflict it flags), the weather (rain expected at 2 PM, right when your outdoor meeting is scheduled), two priority emails it identified from your inbox, and a reminder that your package is arriving today based on a tracking notification it read.
This works because of on-screen awareness and cross-app data access. Siri is not just reading your calendar -- it is correlating information across apps and making judgments about what matters.
2. Multi-Step Trip Planning
You say: "I need to fly to Chicago next Thursday, find me flights under $400 on United, book the cheapest one, add it to my calendar, and text Mom that I'm visiting."
Old Siri would have bailed after "find me flights." New Siri can chain up to 10 sequential actions from a single request; this one breaks down into six:
- Searches United for flights on the specified date
- Filters results under $400
- Identifies the cheapest option and presents it for your confirmation
- After you confirm, completes the booking (if you have United's app with stored payment)
- Creates a calendar event with the flight details
- Drafts and sends a message to Mom
It pauses at step 3 to ask for confirmation before spending money. That is an important detail -- Siri does not just blindly execute financial transactions. Apple built approval checkpoints into the action chain for anything involving purchases, deletions, or sending messages.
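Here is a sketch of how such a checkpoint might work, with invented types -- my illustration, not Apple's API:

```swift
// A sketch of an action chain with approval checkpoints. The types are
// invented; the point is that sensitive steps block until confirmed.

enum ActionKind { case search, filter, purchase, calendarEvent, message }

struct Action {
    let kind: ActionKind
    let description: String
    // Purchases, deletions, and outgoing messages require confirmation.
    var requiresApproval: Bool {
        switch kind {
        case .purchase, .message: return true
        default: return false
        }
    }
}

func execute(chain: [Action], confirm: (Action) -> Bool) {
    for action in chain {
        if action.requiresApproval && !confirm(action) {
            print("Stopped before: \(action.description)")
            return // halt the chain; nothing downstream runs
        }
        print("Executed: \(action.description)")
    }
}

// Usage: the trip-planning chain from above, abbreviated.
let trip = [
    Action(kind: .search, description: "Search United flights for Thursday"),
    Action(kind: .filter, description: "Filter results under $400"),
    Action(kind: .purchase, description: "Book the cheapest flight"),
    Action(kind: .calendarEvent, description: "Add flight to calendar"),
    Action(kind: .message, description: "Text Mom about the visit"),
]
execute(chain: trip) { _ in
    // In the real product, Siri asks out loud; here we just approve.
    true
}
```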
3. The Follow-Up Conversation
This is where multi-turn natural conversations change everything.
You: "What's the weather this weekend?" Siri: "Saturday will be 72 and sunny, Sunday drops to 58 with a chance of rain." You: "What about Monday?" Siri: "Monday looks like 61 and partly cloudy." You: "OK, schedule a barbecue for Saturday afternoon and invite the usual group." Siri: "I'll create an event for Saturday at 2 PM and send invites to your 'Close Friends' group. Should I include a note about bringing anything?"
Old Siri would have lost the thread after the second question. New Siri maintains the conversation context, understands that "what about Monday" refers to weather, and connects "Saturday" in your barbecue request to the Saturday it just told you about.
4. On-Screen Awareness in Action
You are reading a restaurant review in Safari. You say: "Add this place to my list and make a reservation for Friday."
New Siri reads what is on your screen, identifies the restaurant name and location, adds it to your Reminders (or a dedicated list if you have one), and opens the reservation flow through the restaurant's booking system or a supported app like OpenTable.
This is the on-screen awareness feature at work. Siri can see and understand what is currently displayed, which eliminates the tedious step of having to explain context that is literally right in front of both of you.
Another example: you are looking at a photo someone texted you. "Send this to Dad." Siri sees the photo in context, grabs it, and sends it. No need to save it first, open Photos, and share from there.
5. Smart Home Orchestration
You say: "I'm heading to bed."
Siri executes a chain: locks the front door, sets the thermostat to 67°F, turns off all downstairs lights, turns on your bedroom fan, sets your alarm for 6:30 AM, and enables Do Not Disturb.
Old Siri could do some of this through HomeKit scenes, but you had to pre-configure every detail. New Siri can infer routines based on your patterns. If you have been manually doing these steps every night for weeks, Siri learns the pattern and offers to automate it. And because of the multi-step action capability, you can modify it on the fly: "Do my bedtime routine but keep the kitchen light on -- I've got something in the oven."
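Conceptually, the override works like a filter over the learned routine rather than a brand-new scene. A toy sketch, with invented types rather than real HomeKit calls:

```swift
// A sketch of an inferred routine with an on-the-fly override --
// invented types, not the HomeKit API.

struct Step {
    let device: String
    let command: String
}

let bedtimeRoutine = [
    Step(device: "front door", command: "lock"),
    Step(device: "thermostat", command: "set to 67°F"),
    Step(device: "living room lights", command: "off"),
    Step(device: "kitchen light", command: "off"),
    Step(device: "bedroom fan", command: "on"),
    Step(device: "alarm", command: "set for 6:30 AM"),
]

// "Do my bedtime routine but keep the kitchen light on": the override
// is applied as a filter over the learned steps, not a new scene.
func run(_ routine: [Step], except exclusions: Set<String>) {
    for step in routine where !exclusions.contains(step.device) {
        print("\(step.device): \(step.command)")
    }
}

run(bedtimeRoutine, except: ["kitchen light"])
```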
6. Work Research Assistant
You are preparing for a meeting. You say: "Find the Q3 revenue numbers from the PDF Sarah sent me last week, pull up the slides I was editing yesterday, and remind me to follow up with the finance team after the meeting."
Siri searches your Messages for Sarah's PDF, extracts the relevant data, opens your recent Keynote file, and sets a time-based reminder tied to your calendar event's end time.
This kind of cross-app, multi-step task used to require five minutes of manual app-switching. Now it takes one sentence.
How It Compares to ChatGPT and Google Assistant
The obvious question: if you already use ChatGPT or Google Assistant, why does this matter?
vs. ChatGPT
ChatGPT is a brilliant conversationalist and a powerful reasoning engine. But it lives in a box. It cannot control your phone, send your messages, adjust your thermostat, or read what is on your screen. The ChatGPT integration Apple added in iOS 18 was a step in the right direction, but it was bolted on -- a fallback for when Siri did not know the answer.
New Siri has ChatGPT-class reasoning (arguably better, given Gemini's latest benchmarks) plus deep system integration. It is the difference between having a smart friend you can text and having a smart friend who lives in your house and knows where everything is.
Pro tip: The ChatGPT integration still exists in iOS 26 as an option. You can ask Siri to "ask ChatGPT" for a second opinion on complex reasoning tasks. It is not either/or.
vs. Google Assistant
Here is the irony: Google Assistant is also powered by Gemini. But Google's own implementation has been slower to ship the kind of deep device integration that Apple is delivering. Google Assistant on Android is strong at search and information retrieval but still clunky at multi-step on-device actions.
The reason is structural. Apple controls the entire stack -- hardware, OS, apps, and now the AI layer. Google has to negotiate with device manufacturers, deal with Android fragmentation, and work around apps that may or may not support their integration APIs.
Apple took Google's best technology and put it in an environment where it can actually reach its potential. There is something almost poetic about that.
vs. Samsung Bixby / Amazon Alexa
Bixby is not in this conversation. Alexa is strong for smart home control but weak at everything else and has no meaningful mobile presence. Neither has anything close to the reasoning capability of a 1.2-trillion-parameter model.
The Honest Comparison Table
| Feature | New Siri | ChatGPT | Google Assistant |
|---|---|---|---|
| Natural conversation | Yes | Yes | Improving |
| Multi-step actions | Up to 10 | Limited (via plugins) | Limited |
| On-screen awareness | Yes | No | Partial |
| Deep device integration | Full (Apple ecosystem) | Minimal | Moderate (Android) |
| Privacy architecture | PCC + on-device | Cloud-based | Cloud-based |
| Smart home control | HomeKit + Matter | No | Strong |
| Cross-app data access | Yes | No | Limited |
| Offline capability | Basic tasks | No | Basic tasks |
What This Means for the Assistant Wars
This partnership reshapes the competitive landscape in ways that go beyond just "Siri got better."
Apple Admits It Cannot Do Everything Alone
For years, Apple insisted on building everything in-house. The Gemini deal is a pragmatic admission that building a world-class LLM is not the same as building a world-class chip or display. Apple is excellent at hardware, OS design, and privacy engineering. Google is excellent at large-scale AI models. This partnership plays to both companies' strengths.
Google Gets Distribution It Could Not Buy
Google is paying nothing for this deal -- in fact, Apple is paying Google for the Gemini license. But Google gets something arguably more valuable: validation. The world's most privacy-conscious tech company just said, "Your AI is good enough for our users." That is a stronger endorsement than any benchmark score.
The Pressure on OpenAI Just Doubled
OpenAI had been positioning itself as the "third party" AI provider for everyone. The Apple-Google deal cuts them out of the most lucrative consumer AI integration on the planet. ChatGPT is still accessible on iOS, but it is no longer the default intelligence layer. OpenAI now has to compete for attention on a platform where the built-in option is genuinely good.
Android's Next Move Gets Harder
Google now faces an awkward situation. Its best AI technology is powering a competitor's assistant, and arguably delivering a better user experience than Google's own products. Expect Google to accelerate its own Assistant overhaul on Android, but the fragmentation problem remains a fundamental disadvantage.
What It Cannot Do (Yet)
I want to be honest about limitations because overhyping this helps nobody.
Third-party app depth varies. Siri's ability to perform actions inside third-party apps depends on how well those apps have adopted Apple's App Intents framework. Major apps like Spotify, WhatsApp, and Uber are well-supported. Smaller apps may only support basic actions like "open the app."
Complex reasoning has a ceiling. Gemini is powerful but it is not infallible. Tasks requiring deep domain expertise -- legal analysis, medical interpretation, advanced mathematics -- will sometimes produce confident-sounding but incorrect answers. Treat Siri as a capable assistant, not an expert.
Latency on complex chains. A 10-step action sequence does not happen instantly. Expect a few seconds of processing time for elaborate requests, especially when multiple apps are involved. Simple queries remain fast.
The learning curve is real. If you have trained yourself to ask Siri only simple questions because that is all it could handle, you will need to retrain your habits. The new capabilities only matter if you actually use them, and old habits die hard.
No persistent memory across sessions (yet). Siri remembers context within a conversation but does not yet build a long-term profile of your preferences the way some users might expect. Apple is reportedly working on this for a future update, with privacy-preserving on-device memory.
When You Will Get It and How to Prepare
Availability
- iOS 26.4 ships in late March 2026
- iPadOS 26.4 ships simultaneously
- macOS Tahoe 26.4 follows within a week
- watchOS and HomePod updates are expected in the iOS 26.5 cycle (likely May/June 2026)
Device Requirements
The full Gemini-powered experience requires:
- iPhone 16 or later (iPhone 15 Pro/Pro Max also supported)
- iPad with M1 chip or later
- Mac with M1 chip or later
Older devices will get some improvements to Siri's conversational ability through on-device models, but the full multi-step, on-screen-aware experience needs the newer hardware.
How to Prepare
Update your apps. Make sure your most-used apps are on their latest versions. Developers have been shipping App Intents updates for months in preparation.
Review your Siri settings. Go to Settings > Siri and check that "Listen for 'Siri'" is enabled (the "Hey" has been optional since iOS 17, but many people never changed the setting).
Set up your smart home properly. If you have HomeKit devices, make sure they are all assigned to rooms and labeled clearly. The better your home setup, the more Siri can do with it.
Enable Private Cloud Compute. This should be on by default, but verify it in Settings > Privacy & Security > Apple Intelligence. You can also review exactly which types of requests get sent to the cloud.
Start thinking in multi-step requests. The biggest shift is mental. Instead of breaking tasks into individual commands, start thinking about what you want to accomplish end-to-end. Practice phrasing compound requests.
Pro tip: If you are on the developer beta, the new Siri is available now but labeled as "preview." Some features may behave differently in the final release, particularly the app integrations.
What Developers Should Know
If you build iOS apps, this update matters for your roadmap.
App Intents is no longer optional. With Siri now capable of orchestrating multi-step flows across apps, users will expect your app to participate. If your competitor's app works with the new Siri and yours does not, that is a real competitive disadvantage.
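The App Intents framework itself is real (iOS 16 and later); the intent below is a made-up example of roughly the minimum an app needs to expose a single action:

```swift
import AppIntents

// A minimal App Intent exposing one action to Siri and Shortcuts.
// "AddToReadingListIntent" and its parameter are illustrative names;
// the AppIntents framework and these APIs are real.
struct AddToReadingListIntent: AppIntent {
    static var title: LocalizedStringResource = "Add to Reading List"

    @Parameter(title: "Article Title")
    var articleTitle: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Your app's real logic would persist the item here.
        return .result(dialog: "Added \(articleTitle) to your reading list.")
    }
}
```

Intents like this are what give an orchestrating assistant discrete, typed steps to chain together, rather than forcing it to poke at your UI.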
The new Siri can read your app's UI. On-screen awareness means Siri can extract information from what is displayed in your app, even without explicit App Intents support. This is great for users but means you should ensure your accessibility labels and semantic markup are accurate, because Siri is relying on them.
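These are standard SwiftUI modifiers; the view itself is just an invented example. The point is that accurate labels turn loose text fragments into facts an assistant can reason about:

```swift
import SwiftUI

// Accurate accessibility labels double as machine-readable context.
// The view is illustrative; .accessibilityLabel and .accessibilityValue
// are standard SwiftUI modifiers.
struct PriceRow: View {
    let product: String
    let price: Double

    var body: some View {
        HStack {
            Text(product)
            Spacer()
            Text(price, format: .currency(code: "USD"))
        }
        // Without these, a screen reader (or an on-screen-aware
        // assistant) sees two unrelated text fragments instead of
        // one fact about one product.
        .accessibilityElement(children: .combine)
        .accessibilityLabel("\(product) price")
        .accessibilityValue(Text(price, format: .currency(code: "USD")))
    }
}
```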
Test your app in multi-step chains. Your app might work fine when Siri calls it directly, but break when it is step 4 in a 7-step sequence. Test edge cases around state management and background execution.
The Bottom Line
The new Siri is not a minor update. It is a fundamental rethinking of what Apple's assistant can be, built on top of the best reasoning model available and wrapped in a privacy architecture that no other company has matched.
For the first time, Siri can do what we always wished it could: understand complex requests, remember what we said, see what we see, and take meaningful action across our devices. The partnership with Google is unexpected, but the result speaks for itself.
Will it be perfect on day one? No. The action chains will occasionally stumble, some apps will not support it fully, and you will need to break old habits of keeping your requests simple. But the trajectory is clear. The assistant wars just got a real new contender, and it is the one that already lives on a billion devices.
Update your phone when iOS 26.4 drops. Start talking to Siri like you would talk to a competent human assistant. You might be surprised at what comes back.