dboreham | 3 days ago
> What a wild and speculative claim. Is there any source for this information?

Not sure it's a wild, speculative claim. Claiming someone had achieved FTL travel would fall into that category; I'd call this one more along the lines of exaggerated. I'll make the assumption that what I do is "advanced" (not React todo apps: Rust, Golang, distributed systems, network protocols...), and if so, then I think it's pretty much accurate.

That said, this is only over the past few months. For the first few years of LLM-dom I spent my time learning how they worked and thinking about the implications for understanding how human thinking works. I didn't use them except to experiment. I thought my colleagues who were talking in 2022 about having ChatGPT write their tests were out of their tiny minds; I heard stories about the LLM hallucinating API calls that didn't exist. Then I spent a couple of years in a place with no easy access to code and nobody in my sphere using LLMs.

But around six months ago I began working with people who were using LLMs (mostly Claude) to write quite advanced code, so I did a "wait, what??" about-face and began trying them myself. What I've found so far is that it's quite a bit better than I am at various unexpected kinds of tasks (finding bugs, analyzing large bodies of code and then writing documentation on how they work, looking for security vulnerabilities in code), or at least it's much faster. I've also found that there's a whole art to "LLM Whispering": how to talk to it to get it to do what you want. Much like with humans, except it doesn't try to cut corners or use oddball tech it wants on its resume.

Anyway, YMMV, but I'd say the statement is not entirely false, and will surely be entirely true within a few years.