afavour · 6 hours ago
I still think/hope/pray the future will be on-device models that don't need constant retraining. That would blow up the existing business model, but I think a company could still make good money with a "mostly local, remote for the really challenging stuff" model. The problem is that today's AI companies have taken on so much funding that a reasonable (not crazy) profit margin isn't enough for them.
parliament32 · 5 hours ago
The future is already pretty much here. Note the recent stories about Chrome adding a local model, not to mention the Googlebook demo (if it works as advertised, there's a 0% chance you could get that kind of latency from a non-local model).
davidw · 3 hours ago
If it continues to be a numbers game, where the more resources you throw at it the better it gets, then on-device will always lag behind. I guess it might be good enough for some uses? I kind of loathe the move away from a world where we could control our own computers and run our own software on them.