keeda | a day ago
It does, but I have no mental model of what would be required to efficiently coordinate a bunch of independently operating agents, so it's hard to make a judgement.

Also, about half of it seems to be tests. It even has performance benchmarks, which are always a distant afterthought for anything other than infrastructure code in the hottest of loops! https://github.com/steveyegge/beads/blob/main/BENCHMARKS.md

This is one of the defining characteristics of vibe-coded projects: extensive tests. That's what keeps the LLMs honest.

I had commented previously (https://news.ycombinator.com/item?id=45729826) that the logical conclusion of AI coding will look very weird to us, and I guess this is one glimpse of it.