▲ Sevii 8 hours ago
Apple's goal is likely to run all inference locally. But models aren't good enough yet, and there isn't enough RAM in an iPhone. They just need Gemini to buy time until those problems are solved.
▲ kennywinker 8 hours ago | parent | next [-]
That was their goal, but in the past couple of years they seem to have given up on client-side-only AI. Once they let that go, it became next to impossible to claw back to client-only: as client-side AI gets better, so does server-side, and people's expectations scale with the server side. And everyone for whom this was a dealbreaker has already left the room.
| ||||||||||||||||||||||||||
▲ O5vYtytb 6 hours ago | parent | prev [-]
Well, DRAM prices aren't going down anytime soon, so I see this as quite a push away from local inference.