birdsongs 2 hours ago

> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high.

I could go either way on the future of this, but if you take the view that we're still in the early days, this may not hold. They're notoriously bad at this *so far*. We could still be in the PC DOS 3.x era of this timeline. Wait until we hit the Windows 3.1 or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.
kombookcha 2 hours ago | parent | next

Personally, I strongly doubt it. Since the nature of LLMs gives them no grasp of semantic content or context, I believe they are inherently unsuited to this task. As far as I can tell, it's a limitation of the technology itself, not of the amount of power behind it. Either way, being able to generate or compress loads of text very quickly with no understanding of the contents simply is not the bottleneck in information transfer between human beings.
mcny 2 hours ago | parent | prev

I would like to see the day when the context size is measured in gigabytes or tens of billions of tokens. Not RAG or whatever, but actual context.