caxap 5 hours ago

If this article was written a year ago, I would have agreed. But knowing what I know today, I highly doubt that the outcomes of LLM/non-LLM users will be anywhere close to similar.

LLMs are exceptionally good at building prototypes. If the professor needs a month, Bob will be done with a basic prototype of that paper by lunch on the same day, and will have tried out dozens of hypotheses by the end of it. He will not be chasing some error for two weeks; the LLM will very likely figure it out in a matter of minutes, or not make the error in the first place. Instructing it to validate intermediate results and to profile along the way can work magic.

The article is correct that Bob will not have understood anything, but if he wants to, he can spend the rest of the year understanding what the LLM has built for him, having already verified within the first couple of weeks that the approach actually works. Even better, he can ask the LLM to train him to do the same if he wishes: learn why things work the way they do, why something doesn't converge, etc.

Assuming that Bob is willing to do all that, he will progress way faster than Alice. LLMs won't take anything away if you are still willing to take the time to understand what they're actually building and why things are done the way they are.

5 years from now, Alice will be using LLMs just like Bob, or she will be without a job if she refuses to, because the place will be full of Bobs, with or without understanding.

techblueberry 4 hours ago | parent | next [-]

The problem is that in most environments Bob won't spend the rest of the year figuring out what the LLM did, because Bob will be busy prompting the LLM for the next deliverable, and if all Bob has time for is prompting LLMs, and not understanding, there will be a ceiling on Bob's potential.

This won't affect everyone equally. Some Bobs will nerd out and spend their free time learning, but other Bobs won't.

therealdrag0 2 hours ago | parent [-]

Why would Bob only have time to prompt LLMs? Strange straw man. Many uni courses have always had an element of you get out what you put in; it's the same with LLMs.

techblueberry an hour ago | parent [-]

Why would the university look at the amount of work a student gets done, conclude the student can get 12x as much done because they can do a year's work in a month, and not make the student do 12x more work?

And it's not, strictly speaking, university we're talking about. The way we understand work is going to fundamentally change, and we're not going to value the people who use LLMs to get only 1x done.

But yes, university was always about how much work you put into it, and LLMs are going to make that 10x more obvious.

The point is that the Bob and Alice comparison is already a straw man, but I do squarely believe it's the people with the best mental models, and not the people who "get AI", who will win the new world. If you're curious and good at developing mental models, you can learn "AI" in a week. But if you're curious and good at developing mental models, you've probably already lapped both Bob and Alice.

therealdrag0 5 minutes ago | parent [-]

Honestly I’m not going to review the thread to see if we got our wires crossed at some point, but I agree with your last comment!

Yokohiii 5 hours ago | parent | prev | next [-]

Bob will never figure out there is an error in his paper. If someone tells him, the LLM will have trouble figuring it out as well; remember, the LLM inserted the error to make the paper "look right".

Your perspective is too narrow. In the real world Bob is supposed to produce outcomes that work. If he moves on into industry and keeps producing hallucinated, skewed, manipulated nonsense, he will fall flat instantly. If he manages to survive unnoticed, he will become CEO. The latter is rather unlikely.

doug_durham 2 minutes ago | parent [-]

That's an odd opinion to hold. That's not what real-world usage shows is happening.

piiritaja 5 hours ago | parent | prev [-]

"LLMs won't take anything away if you are still willing to take the time to understand what it's actually building"

But do you actually understand it? The article argues exactly against this point: that when you let agents do the initial work, you cannot understand the problem in the same way as when you do it yourself.

from the article: "you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient."