maplethorpe 4 hours ago
I'm waiting for Anthropic to realise they can just set a few thousand agents loose to do just that, and monopolize the entire software market overnight. I'm not sure why they haven't done this yet.
slfnflctd 3 hours ago
You jest, but it's a good question. When people talk about the 'plateau of ability' agents are widely expected to reach at some point, I suspect a lot of it will boil down to skyrocketing costs and plummeting accuracy past a certain number of agents involved. This seems to me like a much harder limit than context windows or model sizes.

Things like Gas Town are exploring this in what you might call a reckless way; I'm sure there are plenty of more careful experiments being conducted. What I think the ultimate measure of this new tech will be is: how simple a question can a human put to an LLM group, how complex a result can they get back, and how much will they have to pay for it?

It seems obvious to me there is a significant plateau somewhere, it's just a question of exactly where. Things will probably be in flux for a few years before we have anything close to a good answer, and it will probably vary widely between different use cases.
steveBK123 4 hours ago
Because a lot of valuable software is the implicit / organizational / human domain knowledge... not the trillions of lines of code LLMs all scraped and trained on.