BeetleB 7 hours ago

I don't care if it's written by an LLM.

The problem with the article is the complete lack of details. No benchmarks on the iPhone capable models. No details, whatsoever.

Human or LLM - the article is a whole lot of nothing.

doliveira 7 hours ago | parent | next

Funnily enough, to me these aphorisms (?) sound almost like the replicant test in Blade Runner. Like they're the basic unit of "nudging"

nozzlegear 2 hours ago | parent

LLM, recite your baseline:

"It's not just X, it's Y." Slop. "You're absolutely right!" Slop. "And this is key:" Slop. "This is a nuanced topic." Slop.

https://www.youtube.com/watch?v=vrP-_T-h9YM

veunes 5 hours ago | parent | prev

This article is all fluff because it's really marketing. If they mentioned that a 4B model on an iPhone 16 drains 15% of the battery for a single long prompt and triggers hard thermal throttling after 20 seconds, nobody would be clicking on headlines about "commercial viability" fwiw

Domenic_S 5 hours ago | parent

I ran several Gemma 4 quants on my 24 GB Mac mini, and with proper context size tuning they're quick enough I guess, but I would really love to see them working well on an iPhone with 2-3 GB of RAM...
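A back-of-the-envelope sketch of why 2-3 GB is tight for a 4B model (the parameter count is from the thread; the architecture numbers below are hypothetical, just to show how context size tuning moves the needle):

```python
# Rough memory estimate for a 4B-parameter model at 4-bit quantization.
# Illustrative arithmetic, not measurements from the article.
params = 4e9
bytes_per_weight = 0.5                          # 4-bit quant ~ 0.5 bytes/weight
weights_gb = params * bytes_per_weight / 1e9    # weights alone, in GB

# The KV cache grows linearly with context length. Assuming ~30 layers,
# 8 KV heads of dim 128, fp16 keys and values (hypothetical figures):
ctx, layers, kv_heads, head_dim = 2048, 30, 8, 128
kv_gb = ctx * layers * kv_heads * head_dim * 2 * 2 / 1e9  # K+V, 2 bytes each

print(f"weights ~{weights_gb:.2f} GB, KV cache at ctx={ctx} ~{kv_gb:.2f} GB")
```

So the weights alone eat roughly 2 GB before the OS, the app, and the KV cache get anything, which is why halving the context window is one of the few knobs that actually helps on a 2-3 GB phone.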