dgb23 | 5 hours ago
> False, it creates consumer demand for inference chips, which will be badly utilised.

I think the opposite is true. Local inference doesn't have to go over the wire and through a bunch of firewalls and whatnot. The performance of regular consumer hardware with local, smaller models is already decent, and you're utilizing hardware you already own.

> The performance limitations are inherent to the limited compute and memory.

That's true when you plug a local LLM and inference engine into an agent that was built around the assumption of a cloud/frontier model. But agents can be built around local assumptions and more specific workflows and problems. That also includes model orchestration and model choice per task (or even per tool).

The Jevons paradox comes into play with cloud models. When you have fewer resources, you're forced into more deterministic workflows: tighter control over what the agent can do at any point in time, and per-project/session workflows where you generate intermediate programs/scripts instead of letting the agent do whatever it wants.

An example: when you ask a cloud-based agent to do something and it wants more information, it will often run a series of tool calls to gather what it thinks it needs before proceeding. Very often you can front-load that part by first writing a testable program that gathers most of the necessary information up front, and only then moving into the agentic workflow. This approach can produce a bunch of .json or .md files, move things into a structured database, use embeddings, or what have you. It can save you a lot of inference, makes things more reusable, and you don't need as capable a model if its context is already available and tailored to the specific task.
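A minimal sketch of that front-loading idea: a deterministic pre-pass that collects what an agent would otherwise burn tool calls discovering (here, a file listing and TODO comments) into a single context structure. The function name and the output shape are illustrative assumptions, not anyone's actual tooling.

```python
import json
import pathlib

def gather_context(root: str, exts=(".py", ".md")) -> dict:
    """Deterministic, testable pre-pass: collect repo facts up front
    so the agent starts with tailored context instead of exploring
    via tool calls. (Hypothetical schema for illustration.)"""
    root_path = pathlib.Path(root)
    files = sorted(str(p.relative_to(root_path))
                   for p in root_path.rglob("*")
                   if p.is_file() and p.suffix in exts)
    todos = []
    for rel in files:
        text = (root_path / rel).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if "TODO" in line:
                todos.append({"file": rel, "line": lineno,
                              "text": line.strip()})
    return {"files": files, "todos": todos}

# The result can be dumped to context.json (or a database, or an
# embedding store) and handed to a smaller local model:
#   pathlib.Path("context.json").write_text(json.dumps(gather_context(".")))
```

Because the pre-pass is an ordinary program, it can be unit-tested and reused across sessions, which is exactly where a resource-constrained local setup pays off.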
pama | 4 hours ago | parent
Parallel inference at large compute scales superlinearly. There is no way to beat the reduction in memory transfers that data-center inference provides with hardware that fits in anything called a home: it is much more energy efficient to process huge batches of parallel requests than to run one or a handful of queries on a single accelerator.
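A toy back-of-the-envelope sketch of the memory-transfer point (the numbers are illustrative assumptions, not measurements): decoding is memory-bandwidth-bound, and the model weights are streamed from memory once per batch, so per-query weight traffic shrinks roughly with batch size.

```python
# Illustrative: a 70B-parameter model at fp16 is ~140 GB of weights,
# which must be read from memory on every decode step.
WEIGHT_BYTES = 70e9 * 2  # 70B params * 2 bytes (fp16), assumed figures

def per_query_weight_traffic(batch_size: int) -> float:
    """Weights are streamed once per batch, so the memory traffic is
    amortized across every query in that batch."""
    return WEIGHT_BYTES / batch_size

home = per_query_weight_traffic(1)    # single local query
dc = per_query_weight_traffic(256)    # data-center-sized batch
print(home / dc)  # -> 256.0: the batch pays for each weight read 256x over
```

This is why a batched data-center deployment gets far more tokens per joule out of the same silicon than a home box serving one user at a time.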