aabhay a day ago

This feels like the exactly wrong way to think about it IMO. For me “knowledge” is not the explicit recitation of the correct solution, it’s all the implicit working knowledge I gain from trying different things, having initial assumptions fail, seeing what was off, dealing with deployment headaches, etc. As I work, I carefully pay attention to the outputs of all tools and try to mentally document what paths I didn’t take. That makes dealing with bugs and issues later on a lot easier, but it also expands my awareness of the domain, and checks my hubris on thinking I know something, and makes it possible to reason about the system when doing things later on.

Of course, this kind of interactive deep engagement with a topic is fast becoming obsolete. But to me the essence of "knowing" is about doing and experiencing things, updating my Bayesian priors dialectically (to put it fancily).

simonw a day ago | parent | next [-]

I agree that the only reliable way to learn is to put knowledge into practice.

I don't think that's incompatible with getting help from LLMs. I find that LLMs let me try so much more stuff, and at such a faster rate, that my learning pace has accelerated in a material way.

gflarity a day ago | parent [-]

Consider, ever so briefly, that people don't all learn the same. You do you.

simonw a day ago | parent [-]

That's fair.

Something I'm really interested in right now is the balance in terms of the struggle required to learn something.

I firmly believe that there are things where the only way to learn how to do them is to go through the struggle. Writing essays, for example - I don't think you can shortcut learning to write well by having an LLM do that for you, even though actually learning to write is a painful and tiresome process.

But programming... I've seen so many people who quit learning to program because the struggle was too much. Those first six months of struggling with missing semicolons are absolutely miserable!

I've spoken to a ton of people over the past year who always wanted to learn to program but never managed to carve out that miserable six months... and now they're building software, because LLMs have shaved down that learning curve.

theLiminator 21 hours ago | parent | next [-]

I think it really depends on how it's used. It's a massive accelerant if it's just helping you stitch stuff together. Or when it helps you get unblocked by quickly finding you the missing API you need.

But when it replaces you struggling through figuring out the mental model of what you're doing, then I think you end up learning at a much shallower level than you would by doing things manually.

AmbroseBierce 18 hours ago | parent [-]

And that's a fast lane to security issues aplenty, when you cannot spot them because you don't even understand what each part is supposed to do.

habinero 14 hours ago | parent | prev [-]

That's not learning, that's building. It's like trying to learn how to draw via paint by numbers. Do you end up with something you could hang on the wall? Sure. Could you have fun doing it? Sure. Is there anything wrong with just doing that as a hobby? Of course not.

Is it a substitute for actually learning how to look at objects and break them down into shapes and color and value? No. You gotta put in the work if you want the result. Brains just work like that.

"Struggling with semicolons" isn't any different than drawing a hundred derpy looking faces that look terrible.

Ira Glass has a quote about this:

> “Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take awhile. You’ve just gotta fight your way through.”

Herring 13 hours ago | parent | next [-]

I like the sentiment, I really do, but nobody (outside a PhD program) pays you to learn. That's just not how society is set up. If FAANG companies could get away with hiring high school kids at minimum wage to prompt all day, they would. We'll figure that out real quick as that exponential rises. If you don't like it, build a better society. While you still can.

dns_snek 12 hours ago | parent [-]

Correction: Nobody wants to pay for you to learn, yet they implicitly do it and rely on it.

If companies decide that professional learning is unnecessary in the age of AI they'll be committing a horrible blunder. Their "fuck around" phase might sting, but missing an entire generation of skilled professionals is going to make our value skyrocket in the "find out" phase, a few years down the line.

simonw 13 hours ago | parent | prev [-]

I love that Ira Glass quote. I've thought about it a lot!

I still think paint by numbers is a valid early step along the path to learning to draw.

mmasu a day ago | parent | prev | next [-]

I remember a very nice quote from an Amazon exec - "there is no compression algorithm for experience". The LLM may well do wrong things, and you still won't know what you don't know. But then, iterating with LLMs is a different kind of experience, and in the future people will likely do more of that than grinding through the missing-semicolon failures Simon describes below. It's a different paradigm, really.

visarga 21 hours ago | parent | next [-]

Of course there is - if you write good tests, they compress your validation work, and stand in for your experience. Write tests with AI, but validate their quality and coverage yourself.

I think the whole discussion about coding agent reliability is missing the elephant in the room - it's not vibe coding, it's vibe testing. That's when you run the code a few times and say LGTM - the best recipe for shooting yourself in the foot, no matter whether the code was hand-written or made with AI. Just put the screws to the agent and let it handle a heavy test harness.
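The contrast can be sketched with a toy example (the `slugify` helper below is hypothetical, just something small enough to test):

```python
def slugify(title):
    # Hypothetical helper under test: lowercase, collapse whitespace to hyphens.
    return "-".join(title.lower().split())

# "Vibe testing": run it once, eyeball the output, call it LGTM.
print(slugify("Hello World"))  # prints "hello-world"

# A heavier harness: enumerate the edge cases you actually care about.
cases = {
    "Hello World": "hello-world",
    "  leading and trailing  ": "leading-and-trailing",
    "ALREADY-LOWER": "already-lower",
    "": "",
}
for title, expected in cases.items():
    assert slugify(title) == expected, (title, slugify(title))
```

The one-off run passes either way; only the harness would catch a regression in the whitespace or empty-string cases, which is the kind of coverage worth validating yourself even if an agent wrote it.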

mmasu 21 hours ago | parent [-]

This is a very good point; however, the risk of writing bad or insufficiently thorough tests is still there if you don't know what good looks like! The grind will still need to happen, but it will be a different way of gaining experience.

DANmode 9 hours ago | parent [-]

Starting to get it!

New skills, not no skills.

There will still be a wide spectrum of people who actually understand the stack - and people who don't - and no matter how much easier or harder the tools get, those who do aren't going anywhere.

barrkel 18 hours ago | parent | prev [-]

Compression algorithms for experience are of great interest to ML practitioners, and they have some practices that seem to work well: curriculum learning, and feedback from verifiable rewards. Solve problems that escalate in difficulty, sit near the boundary of your capability, and ideally give strong positive or negative feedback on actions sooner rather than later.

johnfn a day ago | parent | prev | next [-]

But how much of that time is truly spent learning relevant knowledge, and how much of it is just (now) useless trivia? Take vector search as an example. Pre-GPT, I would spend an hour chasing down a typo, like specifying 1023 instead of 1024 or something. This sort of problem is now trivially solved in minutes by an LLM that fully understands the API surface area. So what exactly do I lose by not spending that hour chasing it down? It has nothing to do with learning vector search better, and an LLM can do it better and faster than I can.

extr 21 hours ago | parent [-]

I think people fool themselves with this kind of thing a lot. You debug some issue with your GH Actions YAML file for 45 minutes and think you "learned something", but when are you going to run into that specific gotcha again? In reality the only lasting lesson is "sometimes these kinds of YAML files can be finicky". Which you probably already knew at the outset. There's no personal development in continually bashing your head into the lesson of "sometimes computer systems were set up in ways that are kind of tricky if you haven't seen that exact system before". Who cares. At a certain point there is nothing more to the "lesson". It's just time-consuming trial-and-error gruntwork.

Applejinx 16 hours ago | parent | next [-]

GitHub Actions, web development, stuff like that are terrible examples of places not to use AI.

You can't really go to giant piles of technical debt and look to those for places to be human. It's soul-destroying. My concern would be that vibe coding will make those places of soulless technical debt even deeper and deadlier. There will be nobody there, for generations of cruft. Where once the technical debt was made by committee, now it'll be the ghosts of committees, stirred up by random temperature, only to surface bits of rot that just sink back down into the morass, unfixed.

When 'finicky' is actually an interesting problem, or a challenge, that's one thing. When 'finicky' is just 'twelve committees re-hacked this and then it's been maintained by LLMs for years', there is nothing gained by trying to be human at it.

iwontberude 17 hours ago | parent | prev [-]

I don't think it's foolishness. Through random sampling (troubleshooting problems) you can construct a statistically significant model of the whole problem space. Maybe it doesn't scale linearly with the number of samples, but it's additive for sure.

jstummbillig 19 hours ago | parent | prev | next [-]

I think this is exactly right, in principle and in practice. The question is what domain knowledge you should improve to maximize outcomes: will understanding the machine code be the thing that most likely translates to better outcomes? Will building the vector search the hard way be? Or will it be focusing on the thing that you do with the vector search?

At some point things will get hard, as long as the world is. You don't need to concern yourself with any technical layer for that to be true. The less we have to concern ourselves with technicalities, the further that point shifts towards the thing we actually care about.

PessimalDecimal 13 hours ago | parent | prev | next [-]

Forgetting LLMs and coding agents for a second, what OP describes is like watching a YouTube video on how to make a small repair around the house. You can watch it and "know" what needs to be done afterwards. But it is a very different thing to do it yourself.

Ultimately it comes down to whether gaining the know-how through experience is worth it or not.

viking123 12 hours ago | parent [-]

It's like reading a math book.

grim_io 21 hours ago | parent | prev | next [-]

Trial and error is not how apprenticeship works, for example.

As an apprentice, you get correct and sufficiently precise instructions, and you learn downwards from the master's point of perfection.

Maybe we have reached a point where we can be the machine's apprentices in some ways.

bulbar 11 hours ago | parent | prev [-]

Take a look at Bloom's taxonomy. It's exactly what you're talking about.