Apple Can Create Smaller On-Device AI Models from Google's Gemini(macrumors.com)
27 points by thm a day ago | 10 comments
jerrythegerbil a day ago | parent | next [-]

First the announcement of FunctionGemma, then the announcement of Apple partnering with Google's Gemini, and now Apple can create smaller on-device AI models.

It’s been clear since December of last year what the planned trajectory and partnerships would be.

MBCook a day ago | parent | prev | next [-]

Wouldn’t it be interesting if Apple provided different models to different iPhones?

So due to hardware capabilities the iPhone 20 Pro gets an X billion parameter version, but the regular 20 only gets a (2/3 × X) billion one?

That would provide an interesting point of hardware differentiation between the regular and pro models, as well as between each model year.

nullpoint420 a day ago | parent | next [-]

I could 100% see this, and ironically it makes sense. I can totally envision an Apple exec announcing this at a keynote.

“We’re proud to announce that the iPhone 21 is our most performant iPhone yet - capable of running models of up to 20 billion parameters. That’s over 2x the amount on iPhone 20.”

Or something like that.

MBCook a day ago | parent | next [-]

If you could find a good way to communicate it to people that they would believe (X billion whatever is pretty abstract) it could also really help with upgrades.

All of us know phones are basically fast enough and have been for a long time. The screens are already great. The cameras are great. It’s gotten harder and harder to get people to break their cycle of when they upgrade.

I don’t work in AI, and I don’t know the parameter thing well myself. I know what it is abstractly, but I have no idea whether doubling the number makes things 0.3% better, 12% better, or 2000% better. You could try to turn it into some generic benchmark, like the old megahertz race or the “bitness” wars of consoles. But I suspect it would mean about as much to the average person as telling them how many BOGOMIPS a phone has.
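(For what it's worth, published empirical scaling laws suggest model loss falls roughly as a power law in parameter count, so doubling parameters buys a fixed, fairly small relative improvement rather than a 2x one. A rough sketch, using the parameter-scaling exponent reported by Kaplan et al. as an assumed value:)

```python
# Under a power-law scaling assumption L(N) ∝ N ** (-alpha), doubling the
# parameter count N shrinks loss by a constant factor of 2 ** (-alpha).
# alpha ≈ 0.076 is an assumed value taken from published LLM scaling studies.
alpha = 0.076

# Relative loss reduction from doubling parameters: a few percent, not 2x.
improvement = 1 - 2 ** (-alpha)
print(f"loss reduction from 2x params: {improvement:.1%}")
```

So by this (very rough) yardstick, 2x the parameters is closer to a single-digit-percent quality bump than a 100% one, which is exactly why it's hard to market.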

illwrks 15 hours ago | parent [-]

Air model - ditzy and airheaded, prone to exaggerate. Standard model - does enough of what you need, no bells and whistles, less of an airhead. Pro model - for professionals, serious and trustworthy.

kingleopold a day ago | parent | prev | next [-]

I see it as 100% likely that the reality here is that model connected to Siri, with Siri still being Siri, in 5 years. It would be incredible if Claude reaches AGI while Siri, with all its local hardware and local LLM, still can't do a few things right.

adampunk a day ago | parent | prev [-]

Speeds and feeds, just like Steve always said were crucial to put in sales comms! /s

ZeroGravitas a day ago | parent | prev | next [-]

Does it make sense for a single model to be used for all on device LLM tasks or for each app to provide its own customized one?

My gut feeling is the former, but I'm not sure that's actually true.
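(One argument for the former: a single shared base model plus small per-app adapters, LoRA-style, avoids every app shipping its own multi-gigabyte copy of the weights. A back-of-envelope sketch with assumed sizes — the 3B base and 1% adapter fraction are illustrative numbers, not anything Apple has announced:)

```python
# Hypothetical sizes: one shared on-device base model vs. a full copy per app.
# LoRA-style adapters typically add only a small fraction of the base's params.
BASE_PARAMS = 3_000_000_000   # assumed shared base model size (3B params)
ADAPTER_FRACTION = 0.01       # assumed adapter size: ~1% of the base
NUM_APPS = 10

adapter_params = int(BASE_PARAMS * ADAPTER_FRACTION)
shared = BASE_PARAMS + NUM_APPS * adapter_params   # one base + per-app adapters
separate = NUM_APPS * BASE_PARAMS                  # a full model per app

print(f"shared base + adapters: {shared / 1e9:.1f}B params")
print(f"one model per app:      {separate / 1e9:.1f}B params")
```

On those assumptions, per-app customization costs ~10% extra on top of one base model, versus 10x the storage for fully separate models — which is why the shared-base answer seems plausible on a phone.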
