runarberg 2 hours ago
Even so, that doesn’t take away from my point. Traditional specialized models can already do these things, for much less money and without expensive optimization. What traditional models cannot do is the toy aspect of LLMs, and that is the only use case I see for this technology going forward. Let’s say you are right and these things will be optimized, and in, say, 5 years, most models from the big players will be able to do things like reading small text in an obscure image, drawing a picture of a glass of wine filled to the brim, drawing a path through a maze, counting the legs of a five-legged dog, etc. By then they will also have exhausted their venture-capital subsidies, passing the actual cost on to their customers. Why would people use LLMs for these tasks when a traditional specialized model can do them for much cheaper?
energy123 an hour ago | parent
> Why would people use LLMs for these when a traditional specialized model can do it for much cheaper?

This is not too different from where I see things going. I don't think a monolithic LLM that does everything perfectly is where we'll end up. In a finite-compute universe, an LLM is never going to be better at weather forecasting than GraphCast. The LLM will have a finite compute budget, so it should prioritize general reasoning and be capable of calling tools like GraphCast to extend its intelligence into whatever verticals a problem requires. I don't know exactly what that balance will look like, however; the line between specialist application knowledge and general intelligence is pretty blurred, and what the API boundaries (if any) should be is unclear to me. There's also a phenomenon where capability in one vertical helps general reasoning to an extent, so it's not a completely zero-sum tradeoff between specialist expertise and generalist ability, which makes it difficult to know what to expect.
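The dispatch pattern described here can be sketched roughly as follows. This is a toy illustration only: the function names (`forecast_weather`, `general_reasoning`, `route`) and the keyword-based intent detection are all invented for this sketch, and no real LLM or GraphCast API is involved.

```python
def forecast_weather(query: str) -> str:
    # Stand-in for a specialized model like GraphCast.
    return f"specialist forecast for: {query}"

def general_reasoning(prompt: str) -> str:
    # Stand-in for the LLM's own generalist answer.
    return f"general answer to: {prompt}"

# Registry mapping a detected vertical to its specialist tool.
TOOLS = {"weather": forecast_weather}

def route(prompt: str) -> str:
    # Crude keyword-based routing; a real system would let the
    # model itself decide when to emit a tool call.
    for intent, tool in TOOLS.items():
        if intent in prompt.lower():
            return tool(prompt)
    return general_reasoning(prompt)
```

The open question in the comment is exactly where this boundary sits: how much of the routing logic lives inside the model versus in an external registry like the one sketched here.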