▲ konschubert 6 hours ago
I disagree with every sentence of this.

> solves the problem of too much demand for inference

False, it creates consumer demand for inference chips, which will be badly utilised.

> also would use less electricity

What makes you think that? (MAYBE you can save power on cooling. But not if the data center is close to a natural heat sink.)

> It's just a matter of getting the performance good enough.

The performance limitations are inherent to the limited compute and memory.

> Most users don't need frontier model performance.

What makes you think that?
▲ dgb23 5 hours ago | parent | next
> False, it creates consumer demand for inference chips, which will be badly utilised.

I think the opposite is true. Local inference doesn't have to go over the wire and through a bunch of firewalls and what have you. The performance from regular consumer hardware with local, smaller models is already decent. You're utilizing the hardware you already have.

> The performance limitations are inherent to the limited compute and memory.

That's true when you plug a local LLM and inference engine into an agent that is built around the assumption of using a cloud/frontier model. But agents can be built around local assumptions and more specific workflows and problems. That also includes the model orchestration and the model choice per task (or even per tool).

The Jevons paradox comes into play with cloud models. But when you have fewer resources you are forced to move toward more deterministic workflows. That includes tighter control over what the agent can do at any point in time, but also per-project/session workflows where you generate intermediate programs/scripts instead of letting the agent just do whatever it wants.

Let me give you an example: when you ask a cloud-based agent to do something and it wants more information, it will often do a series of tool calls to gather what it thinks it needs before proceeding. Very often you can front-load that part by first writing a testable program that gathers most of the necessary information up front, and only then moving into an agentic workflow. This approach can produce a bunch of .json or .md files, or it can move things into a structured database, or you can use embeddings, or what have you. This can save you a lot of inference, make things more reusable, and you don't need a model that is as capable if its context is already available and tailored to a specific task.
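The front-loading idea can be sketched as a small deterministic script that runs once before the agent session, so the agent reads a prebuilt summary instead of burning inference on exploratory tool calls. A minimal sketch in Python, where the `gather_context` helper, the project path, and the `context.json` output file are all hypothetical names chosen for illustration:

```python
import ast
import json
from pathlib import Path

def gather_context(root: str) -> dict:
    """Walk a Python project and collect a cheap, deterministic summary:
    the file tree plus top-level function/class names. Running this once
    up front stands in for a series of exploratory agent tool calls."""
    context = {"files": [], "symbols": {}}
    for path in sorted(Path(root).rglob("*.py")):
        rel = str(path.relative_to(root))
        context["files"].append(rel)
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that don't parse; the agent can revisit them
        context["symbols"][rel] = [
            f"{type(node).__name__}:{node.name}"
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]
    return context

# Usage (before starting the agent session):
#   ctx = gather_context("path/to/project")
#   Path("context.json").write_text(json.dumps(ctx, indent=2))
```

Because the script is ordinary, testable code, you can verify its output once and reuse it across sessions, which is exactly where a smaller local model benefits: its context arrives already gathered and tailored.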
▲ locknitpicker 5 hours ago | parent | prev | next
> What makes you think that?

The fact that today's and yesterday's models are quite capable of handling mundane tasks, and even the companies behind frontier models are investing heavily in strategies to manage context instead of blindly plowing through problems with brute-force generalist models.

But let's flip this around: what on earth even suggests to you that most users need frontier models?
▲ 6 hours ago | parent | prev | next
[deleted]
▲ ekianjo 6 hours ago | parent | prev
> What makes you think that?

Looking at actual users of LLMs.