TheOtherHobbes 9 hours ago

It's a very low baseline with Siri, so almost anything would be an improvement.

anamexis 7 hours ago | parent | next [-]

The point is that once Siri is switched to a Gemini-based model, the baseline presumably won't be low anymore.

brokencode 4 hours ago | parent | next [-]

I’m not so sure. Just think about coding assistants with MCP based tools. I can use multiple different models in GitHub Copilot and get good results with similarly capable models.

Siri’s functionality and OS integration could be exposed in a similar, industry-standard way via tools provided to the model.

Then any other model can be swapped in quite easily. Of course, they may still want to do fine tuning, quantization, performance optimization for Apple’s hardware, etc.

But I don’t see why the actual software integration part needs to be difficult.
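The tool-exposure idea above can be sketched in a few lines. This is a hedged, minimal illustration of the MCP-style pattern (a registry of JSON-schema-described tools plus a dispatcher the model calls into), not Apple's or Copilot's actual API; `set_alarm` and its schema are invented for illustration:

```python
# Minimal sketch: OS functionality exposed as schema-described "tools"
# that any model can discover and invoke. All names here (set_alarm,
# the registry layout) are illustrative assumptions, not real Apple APIs.
import json
from typing import Callable

TOOLS: dict[str, dict] = {}          # tool name -> JSON-schema description
HANDLERS: dict[str, Callable] = {}   # tool name -> implementation

def tool(name: str, description: str, parameters: dict):
    """Register an OS capability so a model can see its schema and call it."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = {"name": name, "description": description,
                       "parameters": parameters}
        HANDLERS[name] = fn
        return fn
    return register

@tool("set_alarm", "Set an alarm at a given time.",
      {"type": "object", "properties": {"time": {"type": "string"}}})
def set_alarm(time: str) -> str:
    return f"Alarm set for {time}"

def dispatch(tool_call_json: str) -> str:
    """Execute a model-issued tool call. The model only ever sees TOOLS;
    swapping the model out doesn't touch the handlers underneath."""
    call = json.loads(tool_call_json)
    return HANDLERS[call["name"]](**call["arguments"])
```

The point of the pattern is the decoupling: the model consumes the schemas in `TOOLS`, so a Gemini-based Siri, a Copilot model, or anything else could drive the same handlers.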

inferiorhuman 2 hours ago | parent | prev [-]

Doubt it. Of all the issues I run into with Siri none could be solved by throwing AI slop at it. Case in point: if I ask Siri to play an album and it can't match the album name it just plays some random shit instead of erroring out.

andy_ppp 43 minutes ago | parent [-]

Um, if I ask an LLM about a fake band it literally says "I couldn't find any songs by that band, did you type it correctly?", and it's about a million times more likely to guess correctly. Why do you say it doesn't solve loads of things? I'm more concerned about the problems it creates (prompt injection, hallucinations in important work, bad logic in code); the actual functionality will be fantastic compared to Siri right now!

inferiorhuman 2 minutes ago | parent [-]

  Why do you say it doesn't solve loads of things? 
Because I'm sitting here twiddling my thumbs waiting for random pages to go through their anti-LLM bot crap. LLMs create more problems than they solve.

  Um, if I ask an LLM about a fake band it literally says "I couldn't find
  any songs by that band, did you type it correctly?", and it's about a
  million times more likely to guess correctly
Um, if Apple wrote proper error handling in the first place, the issue would be solved without LLM baggage. Apple made a conscious decision to handle "unknown" artists this way; LLMs don't change that.
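The "proper error handling" being asked for is a small amount of code. A hedged sketch (the album list and the fuzzy-match cutoff are illustrative assumptions): fuzzy-match the request against the library, and refuse outright instead of playing something random when nothing comes close.

```python
# Sketch of "error out instead of guessing": fuzzy-match the spoken album
# name against the library, and raise when nothing plausibly matches.
# LIBRARY and the 0.6 cutoff are illustrative, not any real Apple behavior.
import difflib

LIBRARY = ["Abbey Road", "Rumours", "Kind of Blue"]

def find_album(query: str, cutoff: float = 0.6) -> str:
    """Return the closest library match, or raise rather than play
    random content when the request doesn't resemble anything owned."""
    matches = difflib.get_close_matches(query, LIBRARY, n=1, cutoff=cutoff)
    if not matches:
        raise LookupError(f"No album matching {query!r}; nothing played.")
    return matches[0]
```

With this shape, a slightly misheard "Abby Road" still resolves, while a complete mismatch surfaces as an error the assistant can report.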
eastbound 8 hours ago | parent | prev [-]

Ollama! Why didn’t they just run Ollama and a public model? They’ve spent the last 10 years with a Siri that doesn’t recognize any contact named Chronometer, only to now require the best-in-class LLM?

JumpCrisscross 40 minutes ago | parent | next [-]

> Why didn’t they just run Ollama and a public model

Same reason they switched to Intel chips in the 2000s. They were better. Then Cupertino watched. And it learned. And it leapfrogged.

If I were Google, my fear would be Apple launching and then cutting the line at TSMC to mass produce custom silicon in the 2030s.

crazygringo 4 hours ago | parent | prev | next [-]

I'm genuinely curious about this too. If you really only need the language and common sense parts of an LLM -- not deep factual knowledge of every technical and cultural domain -- then aren't the public models great? Just exactly what you need? Nobody's using Siri for coding.

Are there licensing issues regarding commercial use at scale or something?

macNchz 2 hours ago | parent [-]

Pure speculation, but I’d guess that an arrangement with Google comes with all sorts of ancillary support that will help things go smoothly: managed fine tuning/post-training, access to updated models as they become available, safety/content-related guarantees, reliability/availability terms so the whole thing doesn’t fall flat on launch day etc.

andy_ppp 40 minutes ago | parent [-]

Probably repeatability and privacy guarantees around infrastructure and training too. Google already has very well-defined splits between its Gemma and in-house models, with engineers and researchers rarely communicating directly.

chankstein38 8 hours ago | parent | prev [-]

The other day I was trying to navigate to a Costco in my car. So I opened google maps on Android Auto on the screen in my car and pressed the search box. My car won't allow me to type even while parked... so I have to speak to the Google Voice Assistant.

I was in the map search, so I just said "Costco" and it said "I can't help with that right now, please try again later" or something of the sort. I tried a couple more times, then switched to saying "Navigate me to Costco," at which point it finally ran the search in the text box and found it for me.

Obviously this isn't the same thing as Gemini, but the Android Auto experience gets worse and worse as time passes, and I'm concerned that now we're going to have two Google voice assistants.

Also, tbh, Gemini was great a month ago but since then it's become total garbage. Maybe it passes benchmarks or whatever but interacting with it is awful. It takes more time to interact with than to just do stuff yourself at this point.

I tried Google Maps AI last night and, wow. The experience was about as garbage as you can imagine.

woah 7 hours ago | parent [-]

Siri on my Apple Home will default to turning off all the lights in the kitchen if it misunderstands anything. Much hilarity ensues

antod an hour ago | parent [-]

Share and Enjoy