| ▲ | egl2020 12 hours ago |
| "You can learn anything now. I mean anything." This was true before LLMs, too. What's changed is how much work it takes to get an "answer". If the LLM hands you that answer, you've forgone the learning you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity. |
|
| ▲ | lich_king 12 hours ago | parent | next [-] |
| I'm conflicted about this. On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. Sure, they were available before, but maybe in textbooks you needed to pay for (how quaint), or on websites that appeared on the fifth page of search results. Whatever the externalities of that, in the short term, that part may be a net positive for learners. On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away any wiser, because you have not flexed any problem-solving muscles and have not built that muscle memory. I've had multiple interactions like that: someone asked an LLM for an ELI5 and tried to leverage that in a conversation, and... the abstraction they came back with feels profound to them, but is useless and wrong. |
| |
| ▲ | amoorthy 9 hours ago | parent | next [-] | | This. I feel this all the time. I love 3Blue1Brown's videos, and when I watch them I feel like I really get a concept. But I don't retain it as well as I do the things I learned in school. It's possible my brain is not as elastic now in my 40s. Or maybe there's no substitute for doing something yourself (practice problems), and that's the missing part. | |
| ▲ | mvaliente2001 10 hours ago | parent | prev | next [-] | | One factor in favor of using LLMs as a learning tool is the poor quality of documentation. It seems we've forgotten how to write usable explanations that help readers build a coherent model of the topic at hand. | |
| ▲ | ValentineC 10 hours ago | parent | prev | next [-] | | > On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. The other benefit is that LLMs, for superficial topics, are the most patient teachers ever. I can ask one to explain a concept multiple times, hoping that it'll eventually click for me, and not worry that I'll look stupid, or that it'll be annoyed or lose patience. | |
| ▲ | DoingIsLearning 9 hours ago | parent | prev [-] | | > learning is doing; I could not agree more. |
|
|
| ▲ | _doctor_love 12 hours ago | parent | prev | next [-] |
| It always comes down to economics and then the person and their attitude towards themselves. Some things are worth learning deeply, in other cases the easy / fast solution is what the situation calls for. I've thought recently that some kinds of 'learning' with AI are not really that different from using Cliffs Notes back in the day. Sometimes getting the Cliffs Notes summary was the way to get a paper done OR a way to quickly get through a boring/challenging book (Scarlet Letter, amirite?). And in some cases reading the summary is actually better than the book itself. BUT - I think everyone could agree that if you ONLY read Cliffs Notes, you're just cheating yourself out of an education. That's a different and deeper issue because some people simply do not care to invest in themselves. They want to do minimum work for maximum money and then go "enjoy themselves." Getting a person to take an interest in themselves, in their own growth and development, to invite curiosity, that's a timeless problem. |
| |
| ▲ | andai 11 hours ago | parent | next [-] | | So I've actually been putting more effort into deliberate practice since I started using AI in programming. I've been a fan of Zed Shaw's method for years: typing out interesting programs by hand. But I've been appreciating it even more now, as a way to stave off the feeling of my brain melting :) The gross feeling I get when I go too long without doing cardio is similar to the one I get when I go too long without actually writing a substantial amount of code myself. I think the feeling of making a sustained effort is itself something necessary and healthy, and rapidly disappearing from the world. | | | |
| ▲ | skydhash 11 hours ago | parent | prev [-] | | I've always liked the essential/accidental complexity split. It can be hard to find, but from a problem-solving perspective, it may define what's fun and what's a chore. I've been reading the OpenBSD source lately, and it's quite nice how they've split the general OS concepts from the machine-dependent needs, and the general way they've separated interfaces and implementation. I believe that once you've solved the essential problem, the rest becomes way easier because you have a direction. But doing accidental problem solving without having done the essential one is pure misery. |
|
|
| ▲ | scott_s 11 hours ago | parent | prev | next [-] |
| That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out. Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work. |
| |
| ▲ | phil21 11 hours ago | parent | next [-] | | Yeah, it's a great way for me to reduce activation energy to get started on a specific topic. Certainly doesn't get me all the way home, but cracks it open enough to get started. | |
| ▲ | bee_rider 11 hours ago | parent | prev [-] | | I kinda wonder to what extent grad students’ experience grading projects and homework will end up being a differentiating skill. 75% kidding. |
|
|
| ▲ | wcfrobert 12 hours ago | parent | prev | next [-] |
| My solution to this is to prioritize. There isn't enough time in a person's life to learn everything anyways. Selectively pick and struggle through things you want to learn deeply. And let AI spoon-feed you for things you don't care as much about. |
| |
| ▲ | sp1nningaway 11 hours ago | parent [-] | | I've managed to go my whole career using regex without ever fully grokking it, and now I finally feel free to never learn! I've also wanted to play with C and Raylib for a long time, and now I'm comfortable coding by hand and struggling with it; I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours. | | |
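A made-up example of the kind of regex question this covers — the log format and pattern below are hypothetical, just a sketch of the capture-group-and-`matchAll` territory that never quite sticks:

```typescript
// Hypothetical log line; the goal is to pull out each key="value" pair.
const logLine = 'user="alice" action="login" status="ok"';

// (\w+) captures the key, ([^"]*) captures the quoted value; the g flag
// makes matchAll walk every occurrence in the string.
const pairs = [...logLine.matchAll(/(\w+)="([^"]*)"/g)].map(
  (m) => [m[1], m[2]]
);
// pairs: [["user","alice"], ["action","login"], ["status","ok"]]
```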
| ▲ | andoando 10 hours ago | parent [-] | | Same. There are a few things I never learned and don't care to learn, and ultimately there's no greater value in learning them. Like, do I really get anything out of learning another framework, or how some particular library does something? | | |
| ▲ | insin 8 hours ago | parent [-] | | If you're going to deploy what you make with them to production without accidentally blowing your feet off, 100% you do, be they RegExp or useEffect(): if you can't even tell which way the gun is pointing, how are you supposed to know which way the LLM has oriented it? I pick useEffect() as my second example because it took down Cloudflare, and if you see one with a tell-tale LLM comment attached to it in a PR from your coworkers, who are now _never_ going to learn how it works, you can be almost certain it's either unnecessary or buggy. | | |
| ▲ | andoando 7 hours ago | parent [-] | | For things I'm working on seriously for my work, for sure, I spend time understanding them, and LLMs help with that. I suppose, having experience, I'm already prone to asking questions about things I suspect can go wrong. But there are also a ton of times when something isn't at all important to me and I don't want to waste 3 hours on it. |
|
|
|
|
|
| ▲ | twodave 12 hours ago | parent | prev | next [-] |
| I am beginning to disagree with this, or at least to question its universal truth. There are so many times when "learning" is an exercise in applying wrong advice over and over until something finally succeeds. For instance, retrieving the absolute path an Angular app is running at in a way that is safe both on the client and in SSR contexts has a very clear answer, but there are myriad wrong ways people attempt that task before they stumble upon the Location injectable. In cases like that, the LLM is often able to tell you not only the correct answer the first time (which means a lot less "noise" in the process trying to teach you wrong things) but also to explain how the answer applies in a way that teaches you something you'd never have learned otherwise. We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping. That said, there are also absolutely times (and it's not always clear that a particular example is one of those times!!) when learning something the "long" way builds our long-term/muscle memory or expands our understanding in a valuable way. And this is where using LLMs is still a difficult choice for me. I think it's a less difficult choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route. |
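Angular specifics aside, a framework-free sketch of the underlying pitfall may help. This is not the Location injectable itself, just a hypothetical helper illustrating why naively reaching for browser globals breaks under SSR:

```typescript
// Hypothetical helper, not the Angular API: during server-side rendering
// there are no browser globals, so reading the current path has to degrade
// to a configured fallback instead of touching `window.location` directly.
function currentPath(fallback: string = "/"): string {
  // `globalThis.location` exists in the browser but not on the server.
  const g = globalThis as { location?: { pathname: string } };
  return g.location?.pathname ?? fallback;
}
```

Angular's Location injectable solves this properly by abstracting the platform away; the sketch only shows the naive read that the wrong answers tend to reach for.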
|
| ▲ | colecut 9 hours ago | parent | prev | next [-] |
| AI gave you the option of making it happen without learning anything. It also gives you an avenue to accelerate your learning if that is your goal. |
|
| ▲ | mgraczyk 10 hours ago | parent | prev | next [-] |
| I learn a lot faster now with LLMs. You could learn the Windows APIs much faster if you wanted to learn them. |
| |
| ▲ | cmiles74 9 hours ago | parent | next [-] | | Is this maybe more about the quality of the documentation? I say this because my thinking is that reading is reading; it takes the same time to read the information either way. | |
| ▲ | 20k 10 hours ago | parent | prev [-] | | How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double-check everything they say against the docs anyway | |
| ▲ | subscribed 9 hours ago | parent | next [-] | | I learn fastest from examples, from application of the skill/knowledge, with explanations. AIs let me get on with Python MUCH faster than I was doing on my own, and understand more of the arcane secrets of jq in 6 months than I had in a few years before. And the AI's mistakes are a brilliant opportunity to debug, to analyse, and to go back to it saying "I beg your pardon, wth is this" :) pointing at the elementary mistakes you now see because you understand the flow better. Recently I had a fantastic back and forth with Claude over one of my precious tools written in Python: I was trying to understand the specifics of a particular function's behaviour, discussing typing, arguing about trade-offs and portability. The thing I really like is that I always get pushback, or things to consider, if I come up with something stupid. It's a tailored team exercise and I'm enjoying it. | |
| ▲ | simonw 9 hours ago | parent | prev | next [-] | | Human teachers make mistakes too. If you aren't consuming information with a skeptical eye, you're not learning as effectively as you could be, no matter what the source is. The trick to learning with LLMs is to treat them as one of multiple sources of information, and work with those sources to build your own robust mental model of how things work. If you exclusively rely on official documentation, you'll miss out on things that the documentation doesn't cover. | |
| ▲ | 20k 9 hours ago | parent [-] | | If I have to treat LLMs as a fallible source of information, why wouldn't I just go right to the source, though? Having an extra step in between me and the actual truth seems pointless. WinAPI docs are pretty accurate and up to date | |
| ▲ | simonw 9 hours ago | parent | next [-] | | Because it's faster. If the WinAPI docs are solid, you can do things like copy and paste pages of them into Claude and ask a question, rather than manually scanning through them looking for the answer yourself. Apple's developer documentation is mostly awful - try finding out how to use the sips or sandbox-exec CLI tools, for example. LLMs have unlocked those for me. | |
| ▲ | 20k 8 hours ago | parent [-] | | But you have to check the answer against the documentation anyway, to validate that it's actually correct! Unless you're just taking the LLM answers at face value? | |
| ▲ | simonw 7 hours ago | parent [-] | | For most code stuff you don't check the answer against the documentation - you write the code and run it and see if it works. That's always a better signal than anything that official documentation might tell you. | | |
| ▲ | 20k 6 hours ago | parent [-] | | That seems like a strong error, you have no idea if it works or if it just happens to work | | |
| ▲ | simonw 6 hours ago | parent [-] | | If you're good at programming you can usually tell exactly why it worked or didn't work. That's how we've all worked before coding agents came along too - you don't blindly assume the snippet you pasted off StackOverflow will work, you try it and poke at it and use it to build a firm mental model of whether it's the right thing or not. | | |
| ▲ | 20k 5 hours ago | parent [-] | | Sure. A big part of how I'd know that the function I'm calling does what I think it does is by reading the source documentation associated with it. Does it have any threading preconditions? Any weird quirks? Any strange UB? That's stuff you can't find out just by testing. You can ask the LLM, but then you have to read the docs anyway to check its answer | |
| ▲ | simonw 4 hours ago | parent [-] | | I envy you for the universally high quality of documentation that the code you are working with has! |
|
|
|
|
|
| |
| ▲ | mgraczyk 9 hours ago | parent | prev [-] | | Because it will take you years to read all the information you can get funneled through an LLM in a day | | |
| ▲ | 20k 8 hours ago | parent [-] | | Except you have no idea if what the LLM is telling you is true. I do a lot of astrophysics. Universally, LLMs are wrong about nearly every astrophysics question I've asked them - even the basic ones, in every model I've ever tested. It's terrifying that people take these at face value. For research at a PhD level, they have absolutely no idea what's going on. They just make up plausible-sounding rubbish | |
| ▲ | cdetrio 6 hours ago | parent | next [-] | | Astrophysicist David Kipping had a podcast episode a month ago reporting that LLMs are working shockingly well for him, as well as for the faculty at the IAS.[1] It's curious how different people come to very different conclusions about the usefulness of LLMs. https://youtu.be/PctlBxRh0p4 | | |
| ▲ | 20k 6 hours ago | parent [-] | | The problem with these long videos is that what I really want to see is what questions were asked of it, and the accuracy of the results. Every time I ask LLMs questions I know the answers to, their results are incomplete, inaccurate, or just flat-out wrong much of the time. The idea that AI is an order of magnitude superior to coders is flat-out wrong as well. I don't know who he's talking to |
| |
| ▲ | mgraczyk 7 hours ago | parent | prev [-] | | Somehow we went from writing software apps and reading API docs to research-level astrophysics. Sure, it's not there yet. Give it a few months | |
| ▲ | 20k 6 hours ago | parent [-] | | It doesn't even work for basic astrophysics. I asked ChatGPT the other day: "Where did elements heavier than iron come from?" The answer it gave was totally wrong. It's not a hard question. I asked it this question again today, and some of it was right (!). That is such a low bar for basic questions |
|
|
|
|
| |
| ▲ | mgraczyk 9 hours ago | parent | prev [-] | | Yes, you have to be careful, but the LLM will read and process code and documentation literally millions of times faster than you, so it's worth it | |
| ▲ | 20k 8 hours ago | parent | next [-] | | I mean, is it really that hard to find information in the docs? Like, if I want to find out what, I don't know, "GetQueuedCompletionStatus" does, I google GetQueuedCompletionStatus. Find this page: https://learn.microsoft.com/en-us/windows/win32/api/ioapiset... Bam, that's the single source of truth right there. Microsoft's docs are pretty great. If I use an LLM, I have to ask it for the documentation about "GetQueuedCompletionStatus". Then I have to double-check its output, because LLMs hallucinate. Double-checking its output involves googling "GetQueuedCompletionStatus", finding this page: https://learn.microsoft.com/en-us/windows/win32/api/ioapiset... And then reading the docs to validate whether or not what it's told me is correct. How does this save me any time? | |
| ▲ | mgraczyk 8 hours ago | parent [-] | | How about we do the following. I have not done win32 programming in 12 years. Maybe you've done it more recently. I'll use an LLM and you look things up manually. We can see who can build a win32 admin UI that shows a realtime view of every open file by process, with sorting, filtering, and search on both the files and process/command names. I estimate this will take me 5 minutes. Would you like to race? | |
| ▲ | 20k 6 hours ago | parent [-] | | This mentality is fundamentally why I think AI is not that useful; it completely underscores everything that's wrong with software engineering and what makes a very poor-quality senior developer. I'll write an application without AI that has to be maintained for 5 years with an ever-evolving feature set, and you can write your own with AI, and we'll see which codebase is easiest to maintain, the most productive to add new features to, and has the fewest bugs and best performance | | |
| ▲ | mgraczyk 6 hours ago | parent [-] | | Sure, let's do it. I am pretty confident mine will be more maintainable, because I am an extremely good software engineer, AI is a powerful tool, and I use AI very effectively. I would literally claim that with AI I can work faster and produce higher quality output than any other software engineer who is not using AI. Soon that will be true for all software engineers using AI. | |
| ▲ | 20k 6 hours ago | parent [-] | | I'm curious, have you ever worked on a single software project for more than 5 years? |
|
|
|
| |
| ▲ | skydhash 8 hours ago | parent | prev [-] | | Why does it matter? We have tables of contents, indexes, and references for books and other content. That's a lot of navigational aid. Also, they help by providing you a general overview of the domain. |
|
|
|
|
| ▲ | dieselgate 9 hours ago | parent | prev | next [-] |
| Reminds me of something a friend said towards the end of college: "it's only like 12 thousand dollars a year to learn everything there is to know." Take it with a grain of salt. |
| |
|
| ▲ | esafak 11 hours ago | parent | prev | next [-] |
| It is uncertain what will be valuable in the future at the rate things are changing. |
|
| ▲ | tsunamifury 12 hours ago | parent | prev | next [-] |
| Books are for the mentally enfeebled who can't memorize knowledge. - Socrates |
| |
| ▲ | aozgaa 12 hours ago | parent | next [-] | | I can’t tell if this is a genuine quote or not. Can you provide a citation? (I think something like this comes up in the Phaedrus) | |
| ▲ | nightski 12 hours ago | parent | prev | next [-] | | Aren't books to communicate knowledge? | |
| ▲ | goatlover 12 hours ago | parent | prev | next [-] | | Written by Plato. | |
| ▲ | sdf2df 12 hours ago | parent | prev [-] | | You're quoting the wrong person, but he did not foresee the benefit of leveraging the work of others to extend and build on top of it. |
|
|
| ▲ | doctorpangloss 10 hours ago | parent | prev | next [-] |
| I don't know, most shit I learned programming (and subsequently get paid for) is meaningless arcana. For example, Kubernetes. And for you, it's Windows APIs. For programming in general, most learning is worthless. This is where I disagree with you. If you belong to a certain set of cultures, you overindex on the idea that math (for example) is the best way to solve problems, that you must learn all this stuff by a certain pedagogy, and that the people who are best at this are the best at solving problems, which of course is not true. This is why we have politics, and why we have great politicians who hail from cultures that are underrepresented in high levels of math study: getting elected, having popular ideas, and convincing people solves way more of the problems people actually have than math does. This isn't to say that procedural thinking isn't valuable. It's just that, well, joke's on you. ChatGPT will lose elections. But you can have it do procedural thinking pretty well, and what does the learning and economic order look like now? I reject this form of generalization, but there is tremendous schadenfreude about, well, the math people destroying their own relevance. All that said, my actual expertise, people don't pay for. Nobody pays for good game design or art direction (my field). They pay because you know Unity and they don't. They can't tell (and do not pay for) the difference between a good and a bad game. Another way of stating this for the average CRUD developer is: most enterprise IT projects fail, so yeah, the learning didn't really matter anyway. It's not useful to learn how to deliver a better failed enterprise IT project, other than to make money. One more POV: the effortlessness of agentic programming makes me more sympathetic to anti-intellectualism. Most people do not want to learn anything, including people at fancy colleges, including your bosses and your customers, though many fewer in the academic category than, say, in the corporate world. If you told me a chatbot could achieve in hours what would take a world expert days or weeks, I would wisely spend more time playing with my kids and just wait. The waiters are winning. Even in game development (cultural product development generally), it's better to wait for these tools to get more powerful than to learn meaningless arcana. |
| |
| ▲ | drivebyhooting 10 hours ago | parent [-] | | Convincing / coercing a bunch of slaves to build a pyramid takes a leader. But no amount of politics and charisma will calculate the motions of the planets or put satellites in orbit. A nation needs more than just influencers and charlatans. | | |
| ▲ | doctorpangloss 10 hours ago | parent [-] | | > But no amount of politics and charisma will calculate the motions of the planets or put satellites in orbit. the government invented computers. you need politics to fund all of this. you are talking about triumphs of politics as much as invention. i don't know why you think i am pro influencer or charlatan... |
|
|
|
| ▲ | aspenmartin 12 hours ago | parent | prev [-] |
| I do disagree with the notion that you have to slog through a problem to learn efficiently. That it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information in an intuitive way, and with whom you can explore a topic. That's extremely efficient and effective for learning. There is maybe a trade-off somewhat in some things, but this idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want and about the things that you want_, which wasn't possible before. You had to learn, inefficiently(!), a bunch of crap you didn't want to in order to learn the thing you _did_ want to. I will not miss those days. |
| |
| ▲ | tayo42 12 hours ago | parent [-] | | I don't think you're saying the same thing. AI can help you get through the hard stuff efficiently, and you'll learn. It acts as a guide, but you still do the work. Offloading the hard work completely and just getting a summary isn't really learning. |
|