kody a day ago
It's 60-80% as good as Stack Overflow copy-pasting programmers, sure, but those programmers were already providing questionable value. It's nowhere near as good as someone actually building and maintaining systems. It's barely able to vomit out an MVP, and it's almost never capable of making a meaningful change to that MVP. If your experiences have been different that's fine, but in my day job I am spending more and more time just fixing crappy LLM code produced and merged by STAFF engineers. I really don't see that changing any time soon.
lumenwrites a day ago | parent
I'm pretty good at what I do, at least according to myself and the people I work with, and I'm comparing its capabilities (the latest version of Claude used as an agent inside Cursor) to my own. It can't fully do things on its own and it makes mistakes, but it can do a lot.

But suppose you're right and it's 60% as good as "Stack Overflow copy-pasting programmers". Isn't that a pretty insanely impressive milestone to just dismiss? And why would it get to this point and then stop?

We can all see AIs continuously beating the benchmarks, and the progress feels very fast in the day-to-day experience of using them. I'd need to hear a pretty compelling argument to believe that it'll suddenly stop, something more compelling than "well, it's not very good yet, therefore it won't get any better" or "Sam Altman is lying to us because incentives".

Sure, progress could slow down somewhat because of exponentially increasing compute costs, but that assumes no more algorithmic progress, no more compute progress, and no more increase in the capital flowing into this field. I find that hard to believe.