brandensilva 3 days ago
As you get deeper, beyond the starter and bootstrap code, it definitely takes a different approach to get value. This is partly because of the context limits on large code bases, and partly because the knowledge becomes more specialized and the LLM has no training on that kind of code. But people are making it work; it just isn’t as black and white.
bonesss 3 days ago | parent
That’s the issue, though, isn’t it? Why isn’t it black and white? Clear, massive productivity gains at Google or MS and their dev armies should be visible from orbit. Just today on HN I’ve seen claims of 25x and 10x and 2x productivity gains, but none of them starting from well-calibrated estimates using quantifiable outcomes, consistent teams, whole-lifecycle evaluation, and apples-to-apples work.

In my own extensive use of LLMs I’m reminded of the mouse-versus-command-line testing around file navigation. Objective facts and subjective reporting don’t always line up: people feel empowered and productive while ‘doing’ and don’t like ‘hunting’ while uncertain… but our sense of the activity and the measurable output aren’t the same.

I’m left wondering why a 2x Microsoft or OpenAI would ever sell their competitive advantage to others. There’s infinite money to be made exploiting such a tech, but instead we see high-school homework, script gen, and demoware that is already just a few searches away and downloadable.

LLMs are, in essence, copying and pasting existing work while hopping over uncomfortable copyright and attribution qualms, so devs feel like ‘product managers’ and not charlatans. Is that fundamentally faster than a healthy Stack Overflow and a non-enshittified Google? Over a product lifecycle? … ‘Sometimes, kinda.’ In the absence of clear, obvious next-gen production, it feels like we’re expecting a horse with a wagon seat built in to win a Formula 1 race.