▲ sorenjan, 7 hours ago:
  But that's not something you'd use an LLM for. There have been computer vision systems sorting bad peas for more than a decade [0]; of course there are plenty of use cases for very fast inspection systems. But when would you use an LLM for anything like that?
  [0] https://www.youtube.com/watch?v=eLDxXPziztw
  ▲ arcanemachiner, 2 hours ago:
    Nobody said you would use an LLM for that. It's an example of a process where "industrial inspection, in particular, [would] benefit from lower latency in exchange for accuracy". The point of their comment wasn't that you would use an LLM to sort fruit; it was just an illustrative example.
    ▲ sorenjan, 1 hour ago:
      The discussion was about fine-tuned Qwen models, not industrial inspection in general. I would also find it interesting to learn what kind of edge-AI industrial inspection task you could do with fine-tuned LLMs, not some handwavy answer about how latency is sometimes important in real-time systems. Of course it is, which is why you generally don't use models with several billion parameters unless you need to.
      ▲ arcanemachiner, 3 minutes ago:
        The thread you're in broke away from the main discussion topic. Again: nobody is using LLMs to (for example) sort fruit. But there are some industrial processes that prioritize latency over reliability.
  ▲ 0xbadcafebee, 4 hours ago:
    You would use a VLM (vision language model). The model analyzes the image and outputs text, along with general context, that can drive intelligent decisions. https://tryolabs.com/blog/llms-leveraging-computer-vision
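For concreteness, here is a hedged sketch of what calling a VLM for inspection might look like against a local OpenAI-compatible endpoint (as exposed by servers like llama.cpp or vLLM). The model name, prompt, and the `build_vlm_request` helper are all illustrative assumptions, not something from the thread:

```python
import base64
import json

def build_vlm_request(image_bytes: bytes, model: str = "qwen2-vl") -> dict:
    """Build an OpenAI-style chat payload asking a VLM to judge a product image.

    The model name and prompt are illustrative; any OpenAI-compatible
    multimodal endpoint accepts this message shape.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this fruit defective? Answer PASS or REJECT, with a reason."},
                # Image is passed inline as a base64 data URL.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        "max_tokens": 64,
    }

# Stub JPEG header bytes stand in for a real camera frame.
payload = build_vlm_request(b"\xff\xd8\xff")
print(json.dumps(payload)[:40])
```

The free-text answer is what distinguishes this from a classical classifier: the same request can ask for a reason, a category, or structured JSON without retraining.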
▲ embedding-shape, 8 hours ago:
  But why would I want the results to come back faster but less reliably, versus slower and more reliably? This feels like the sort of thing where you'd favor accuracy over speed; otherwise you're just degrading the quality control.
  ▲ CamouflagedKiwi, 3 hours ago:
    It's not that you want it to be faster, but that you want the latency to be predictable and reliable, which is much more the case for local inference than for sending requests over a network (and especially to the current set of frontier model providers, who don't exactly have standout reliability numbers).
  ▲ bigyabai, 8 hours ago:
    The high nines of fruit sorting are usually not worth running a 400-billion-parameter model to catch the last 3 fruit.
  ▲ 0cf8612b2e1e, 7 hours ago:
    A local, offline system you control is worth a lot. Introducing an external dependency guarantees you will have downtime outside of your control.
    ▲ embedding-shape, 5 hours ago:
      Right, but that doesn't answer why you'd need a fast 7B LLM rather than a slightly less fast 14B LLM.
      ▲ 0cf8612b2e1e, 4 hours ago:
        In the hypothetical fruit-sorting example: if you have a hard budget of 10 ms to respond, and the 7B takes 8 ms while the 14B takes 12 ms, there is your answer. Regular engineering, where you balance competing constraints instead of running the biggest model available.
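The hypothetical above is a plain constraint check, sketched here with the commenter's numbers (the `pick_model` helper and the p99 latencies are illustrative assumptions):

```python
def pick_model(models, budget_ms):
    """Pick the most capable model whose measured latency fits the budget.

    `models` is a list of (name, p99_latency_ms, params_billions) tuples.
    Returns None if nothing fits the budget.
    """
    fitting = [m for m in models if m[1] <= budget_ms]
    # Among models that meet the deadline, prefer the largest.
    return max(fitting, key=lambda m: m[2], default=None)

# Numbers mirror the comment above: 7B at 8 ms, 14B at 12 ms, 10 ms budget.
candidates = [("7B", 8.0, 7), ("14B", 12.0, 14)]
print(pick_model(candidates, budget_ms=10.0))  # -> ('7B', 8.0, 7)
```

With a 10 ms budget the 14B is simply ineligible, however much better its accuracy; relax the budget to 15 ms and the same selection rule picks the 14B.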
      ▲ 0xbadcafebee, 4 hours ago:
        ...because sometimes people need a faster answer? There are many possible reasons someone might need speed over accuracy. In the food-sorting example, if lower accuracy means you waste more peanuts, but the speed means you get rid of more bad peanuts overall, then you get fewer complaints about bad peanuts at the cost of a tiny amount of extra material waste.
      ▲ jwatte, 3 hours ago:
        Hard real time is a thing in some systems. Also, the current approaches might have 85% accuracy; if the LLM can deliver 90% accuracy while being "less exact", that's still a win!
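A back-of-the-envelope check of those accuracy figures (the 85% and 90% are from the comment; the line rate, defect fraction, and `escaped_defects` helper are assumptions for illustration):

```python
def escaped_defects(items_per_hour, defect_rate, accuracy):
    """Defective items per hour that slip past an inspector of the given accuracy."""
    return items_per_hour * defect_rate * (1 - accuracy)

# Assumed line: 100k items/hour, 2% of them defective.
rate, defect_frac = 100_000, 0.02
old = escaped_defects(rate, defect_frac, 0.85)  # current approach
new = escaped_defects(rate, defect_frac, 0.90)  # hypothetical LLM-based one
print(round(old), round(new))  # -> 300 200
```

Under those assumptions, the five-point accuracy gain means roughly a third fewer bad items reaching customers each hour, which is the "still a win" being claimed.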