libraryofbabel 6 days ago

Yeah. The empty “it’s just a statistical model” critique (or the dressed-up “stochastic parrots” version of it) is almost a sign at this point that the person using it formed their opinions about AI back when ChatGPT first came out, and hasn’t really bothered to engage with it much since then.

If in 2022 I’d tried to convince AI skeptics that in three years we might have tools on the level of Claude Code, I’m sure I’d have heard everyone say it would be impossible because “it’s just a statistical model.” But it turned out that there was a lot more potential in the architecture for encoding structured knowledge, complex reasoning, etc., despite that architecture being probabilistic. (Don’t bet against the Bitter Lesson.)

LLMs have a lot of problems, hallucination still being one of them. I’d be the first to advocate for a skeptical hype-free approach to deploying them in software engineering. But at this point we need careful informed engagement with where the models are at now rather than cherry-picked examples and rants.

seba_dos1 6 days ago | parent | next [-]

Unless what you work on is very simple and mostly mindless, using tools like Claude Code is the exact opposite of how to make the current SotA LLMs useful for coding. The models can help and boost your productivity, but it doesn't happen by letting them do more stuff autonomously. Quite the contrary.

And when what you usually work on actually is very simple and mostly mindless, you'd probably benefit more from doing it yourself, so you can progress beyond the junior stuff one day.

structural 6 days ago | parent | next [-]

Where it really has value is if what you work on is like 33% extremely difficult and 66% boilerplate/tedious. Being able to offload the tedious bits can make more senior engineers 2-3x more productive without the coordination effort of "find a junior engineer to do this, schedule their time, assign the work, follow up on it".

(The problem, of course, is that you still need these junior engineers to exist in order to have the next generation of senior engineers, so we must now also think about what our junior folks should be doing to be valuable AND learn.)

anuramat 5 days ago | parent | prev [-]

"real programmers use ed", 2025 edition: you're forever stuck with "junior stuff" if you let a language model handle language

god forbid I don't have to read 10k lines of logs to fix a typo

Jensson 6 days ago | parent | prev | next [-]

> If in 2022 I’d tried to convince AI skeptics that in three years we might have tools on the level of Claude Code, I’m sure I’d have heard everyone say it would be impossible because “it’s just a statistical model.”

We already had these coding models in 2022; they were already pattern-matching engines with variables back then. All your imagination needed to do to get from that to Claude Code today was give them more code examples and make them bigger.

They still can't replace even a junior engineer's ability to navigate tasks over short periods of time; just like back then, they need constant handholding to get anything done. So I don't see what has changed except the models being larger with more examples, which lets you get larger chunks of coherent code out of them.

vidarh 6 days ago | parent | prev [-]

People repeating the "stochastic parrot" meme in all its variations appear, if anything, to be more like stochastic parrots than the typical LLM is.