| ▲ | mbirth 3 days ago |
| The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser. |
|
| ▲ | qudat 3 days ago | parent | next [-] |
| Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ... |
| |
| ▲ | Akronymus 2 days ago | parent | next [-] | | So far, each and every time I've used an LLM to help me with something, it hallucinated non-existent functions or was incorrect in an important but non-obvious way. Though, I guess I do treat LLMs as a last-resort long shot for when other documentation is failing me. | | |
| ▲ | naasking 2 days ago | parent | next [-] | | Knowing how to use LLMs is a skill. Just winging it without any practice or exploration of how the tool fails can produce poor results. | | |
| ▲ | b112 2 days ago | parent [-] | | "You're holding it wrong" 99% of an LLM's usefulness vanishes, if it behaves like an addled old man. "What's that sonny? But you said you wanted that!" "Wait, we did that last week? Sorry let me look at this again" "What? What do you mean, we already did this part?!" | | |
| ▲ | naasking 2 days ago | parent [-] | | Wrong mental model. Addled old men can't write code 1000x faster than any human. | | |
| ▲ | b112 2 days ago | parent [-] | | I'd prefer 1x "wrong stuff" to wrong stuff blasted 1000x. How is that helpful? Further, they can't write code that fast, because you have to spend 1000x explaining it to them. | | |
| ▲ | naasking a day ago | parent [-] | | Except it's not 1000x wrong stuff, that's the point. But don't worry, the Amish are welcoming of new luddites! |
|
|
|
| |
| ▲ | oblio 2 days ago | parent | prev [-] | | Which LLMs have you tried? Claude Code seems to be decent at not hallucinating, Gemini CLI is more eager. I don't think current LLMs take you all the way, but a powerful code generator is a useful thing; just assemble guardrails and keep an eye on it. | | |
| ▲ | Akronymus 2 days ago | parent [-] | | Mostly ChatGPT, because I see 0 value in paying for any LLM, nor do I wish to give up my data to any LLM provider | | |
| ▲ | Anamon 2 days ago | parent | next [-] | | Speaking as someone who doesn't really like or do LLM-assisted coding either: at least try Gemini. ChatGPT is the absolute worst you could use. I was quite shocked when I compared the two on the same tasks. Gemini gets decent initial results you can build on. ChatGPT generates 99% absolutely unusable rubbish. The difference is so extreme, it's not even a competition anymore. I now understand why Altman announced "Code Red" at OpenAI. If their tools don't catch up drastically, and fast, they'll be one for the history books soon. Wouldn't be the first time the big, central early mover in a new market suddenly disappears, steamrolled by the later entrants. | |
| ▲ | oblio 2 days ago | parent | prev [-] | | They work better with project context and access to tools, so yeah, the web interface is not their best foot forward. That doesn't mean the agents are amazing, but they can be useful. | | |
| ▲ | Akronymus 2 days ago | parent [-] | | A simple "how do I access x in y framework in the intended way" shouldn't require any more context. Instead of telling me about z option, it keeps hallucinating something that doesn't exist and even says it's in the docs when it isn't. Literally just wasting my time. | | |
| ▲ | oblio 2 days ago | parent [-] | | I was in the same camp until a few months ago. I now think they're valid tools, like compilers. Not in the sense that everyone compares them (compilers made asm development a minuscule niche of development). But in the sense that even today many people don't use compilers or static analysis tools. But that world is slowly shrinking. Same for LLMs, the non LLM world will probably shrink. You might be able to have a long and successful career without touching them for code development. Personally I'd rather check them out since tools are just tools. |
|
|
|
|
| |
| ▲ | _ikke_ 3 days ago | parent | prev [-] | | As long as what it says is reliable and not made up. | | |
| ▲ | qudat 2 days ago | parent | next [-] | | That's true for internet searching. How many times have you gone to SO, seen a confident answer, tried it, and it failed to do what you needed? | | |
| ▲ | Anamon 2 days ago | parent [-] | | Then you write a comment, maybe even figure out the correct solution and fix the answer. If you're lucky, somebody already did. Everybody wins. That's what LLMs take away. Nothing is given back to the community, nothing is added to shared knowledge, no differing opinions are exchanged. It just steals other people's work from a time when work was still shared and discussed, removes any indication of its source, claims it's a new thing, and gives you no way to contribute back, or even discuss it and maybe get confronted with different opinions or discover a better way. Let's not forget that one of the main reasons why LLMs are useful for coding in the first place is that they scraped SO from the time when people still used it. |
| |
| ▲ | anakaine 3 days ago | parent | prev [-] | | I feel like we are just covering whataboutism tropes now. You can absolutely learn from an LLM. Sometimes documentation sucks and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate. And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back end deep dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC2 compliant and OWASP proof, nor does it need ISO27001 compliance testing, because after all this is just for fun, for me. |
|
|
|
| ▲ | hyperadvanced 3 days ago | parent | prev | next [-] |
| You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working. |
| |
| ▲ | inferiorhuman 3 days ago | parent [-] | | Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either. | | |
| ▲ | ben_w 3 days ago | parent | next [-] | | Necessarily, LLM output that works isn't gibberish. The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because it would stop in the middle of writing a function. | | |
| ▲ | inferiorhuman 2 days ago | parent [-] | | > Necessarily, LLM output that works isn't gibberish.
Hardly. Poorly conjured up code can still work. | | |
| ▲ | ben_w 2 days ago | parent [-] | | "Gibberish" code is necessary code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from that solution, c.f. how paper was inspired by watching wasps at work. Even the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand. |
|
| |
| ▲ | oblio 2 days ago | parent | prev [-] | | It's not gibberish. More than that, LLMs frequently write comments (some are fluff but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, stuff like that, plus looking at the reasoning while it's happening provides more clues. Also, it's actually fun watching LLMs debug, since they're reasonably similar to devs while investigating, but they have a data bank the size of the internet so they can pull hints that sometimes surprise even experienced devs. I think hard-earned knowledge coming from actual coding is still useful to stay sharp, but it might turn out the balance is something like 25% handmade - 75% LLM made. | |
| ▲ | inferiorhuman 2 days ago | parent [-] | | > they have a data bank the size of the internet so they can pull hints that sometimes surprise even experienced devs.
That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would otherwise use to find answers." I just discovered another victim: the Renesas forums. Cloudflare is blocking me from accessing the site completely, the only site I've ever had this happen to. But I'm glad you're able to have your fun.
> it might turn out the balance is something like 25% handmade - 75% LLM made.
Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software. | | |
| ▲ | ben_w 2 days ago | parent | next [-] | | > they've stolen a mountain of information
In law, training is not itself theft. Pirating books for any reason, including training, is still a copyright violation, but the judges ruled specifically that training on data lawfully obtained was not itself an offence. Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft. (And indeed it would struggle to be, given all search engines have for a long time been doing just that.)
> As the arms race continues AI DDoS bots will have less and less recent "training" material
My experience as a human is that humans keep re-inventing the wheel, and if they instead re-read the solutions from even just 5 years earlier (or 10, or 15, or 20…) we'd have simpler code and tools that did all we wanted already. For example, "making a UI" peaked sometime between the late 90s and mid 2010s with WYSIWYG tools like Visual Basic (and the Mac equivalent now known as Xojo) and Dreamweaver, and then in the final part of that a few good years where Interface Builder finally wasn't sucking on Xcode. And then everyone on the web went for React, and Apple made SwiftUI with a preview mode that kept crashing. If LLMs had come before reactive UI, we'd have non-reactive alternatives that would probably suck less than all the weird things I keep seeing from reactive UIs. | |
| ▲ | Anamon 2 days ago | parent [-] | | > Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft.
That is simply not true. Freely available on the web doesn't mean it's in the Public Domain. The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well. Otherwise, the recent Spotify dump by Anna's Archive would be legal as well. It all depends on the license the thing is released under, chosen by the person who made it freely accessible on the web. This license is still very emphatically a legally binding document that restricts what someone can do with it. For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period. | |
| ▲ | ben_w 2 days ago | parent [-] | | > Freely available on the web doesn't mean it's in the Public Domain.
Doesn't need to be.
> The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well.
I didn't say "any" use, I said this specific use. Here's the quote from the judge who decided this:
> 5. OVERALL ANALYSIS. After the four factors and any others deemed relevant are "explored, [ ] the results [are] weighed together, in light of the purposes of copyright." Campbell, 510 U.S. at 578. The copies used to train specific LLMs were justified as a fair use. Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.
- https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
> Otherwise, the recent Spotify dump by Anna's Archive would be legal as well.
I specifically said copyright infringement was separate. Because, guess what, so did the judge, the next paragraph but one from the quote I just gave you.
> For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period.
It will be interesting to see if that holds up in future court cases. I wouldn't bank on it if I was you. |
|
| |
| ▲ | oblio 2 days ago | parent | prev [-] | | > That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would otherwise use to find answers."
Yes, but I can't stop them, can you?
> But I'm glad you're able to have your fun.
Unfortunately I have to be practical.
> Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.
Almost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs. The hope that they'll run out of relevant material is slim. Oh, and at this point it's less about the core/kernel/LLMs than it is about building ol' fashioned procedural tooling, aka code, around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do: READ-EVAL-PRINT. I have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places. | |
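A minimal sketch of the kind of "code around the LLM" REPL loop described above (READ a task, EVAL by running the tests, PRINT the failures back to the model); llm_generate and the file names here are hypothetical placeholders for illustration, not any vendor's API:

    import subprocess

    def llm_generate(prompt: str) -> str:
        """Hypothetical placeholder for a call to whatever model is in use."""
        raise NotImplementedError

    def repl_loop(task: str, max_rounds: int = 5) -> bool:
        """READ a task, EVAL by running the test suite, PRINT the failures back."""
        feedback = ""
        for _ in range(max_rounds):
            # READ: ask the model for an attempt, including previous test output
            attempt = llm_generate(f"Task: {task}\nPrevious test output:\n{feedback}")
            with open("generated.py", "w") as f:
                f.write(attempt)
            # EVAL: run the tests against the generated code
            run = subprocess.run(["python", "-m", "pytest", "-q"],
                                 capture_output=True, text=True)
            if run.returncode == 0:
                return True  # tests pass, loop ends
            # PRINT: feed the failure output back into the next prompt
            feedback = run.stdout + run.stderr
        return False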
| ▲ | inferiorhuman 2 days ago | parent [-] | | > The hope that they'll run out of relevant material is slim.
If big corps are training their LLMs on their LLM written code… | | |
| ▲ | oblio 2 days ago | parent [-] | | You're almost there: > If big corps are training their LLMs on their LLM written code <<and human reviewed code>>… The last part is important. | | |
|
|
|
|
|
|
|
| ▲ | dcre 2 days ago | parent | prev | next [-] |
| This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself. |
| |
| ▲ | codr7 2 days ago | parent [-] | | Learning means friction, it's not going to happen any other way. | | |
| ▲ | onemoresoop 2 days ago | parent | next [-] | | Some of it is friction, some of it is play. With AI you can get faster to the play part, where you do learn a fair bit. But in a sense I agree that less is retained. I think that is not because of lack of friction; instead it's the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction then I agree. |
| ▲ | dcre 2 days ago | parent | prev | next [-] | | I agree! I just don’t think the friction has to come from tediously trawling through a bunch of web pages that don’t contain the answer to your question. | |
| ▲ | mountain_peak 2 days ago | parent | prev [-] | | "What an LLM is to me is the most remarkable tool that we've ever come up with, and it's the equivalent of an e-bike for our minds" | |
| ▲ | codr7 a day ago | parent [-] | | Which is about as useful as a bike for our airplanes. |
|
|
|
|
| ▲ | CamperBob2 3 days ago | parent | prev | next [-] |
| I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again. |
| |
| ▲ | ggggffggggg 3 days ago | parent | next [-] | | Exactly, the whole point is it wouldn’t take 30 minutes (more like 3 hours) if the tooling didn’t change all the fucking time. And if the ecosystem wasn’t a house of cards 8 layers of json configuration tall. Instead you’d learn it, remember it, and it would be useful next time. But it’s not. | |
| ▲ | Akronymus 2 days ago | parent | prev [-] | | And I don't want to use tools I don't understand at least to some degree. I always get nervous when I do something but don't know why I'm doing it. | |
| ▲ | xnx 2 days ago | parent [-] | | Depends on what level of abstraction you're comfortable with. I have no problem driving a car I didn't build. | | |
| ▲ | Akronymus 2 days ago | parent [-] | | I didn't build my car either. But I understand a bit of most of the main mechanics, like how the ABS works, how power steering does, how an ICE works, and so on. |
|
|
|
|
| ▲ | qualifck 2 days ago | parent | prev | next [-] |
| Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it. Some people will try to vibe out everything with LLMs, but other people will use them to help engage with their coding more directly and better understand what's happening, not do worse. |
|
| ▲ | 3 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | spankibalt 3 days ago | parent | prev | next [-] |
| I don't think "learning" is a goal here... |
|
| ▲ | ragequittah a day ago | parent | prev | next [-] |
Usually the thing you've learned after googling for half an hour is that Google isn't very useful for search anymore.
|
| ▲ | enraged_camel 3 days ago | parent | prev | next [-] |
| >> The difference is that after you’ve googled it for ½ hour, you’ve learned something. I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday. |
| |
| ▲ | rob74 3 days ago | parent [-] | | Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how that thing you wanted to do actually worked, the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)... |
|
|
| ▲ | ajmurmann 2 days ago | parent | prev | next [-] |
| I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc. |
|
| ▲ | fleroviumna 2 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | visarga 3 days ago | parent | prev [-] |
Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google; solving was already based on extensive web resources. While in 1995 you would really have needed to do it manually. Instead of manual coding training, your time is better invested in learning to channel coding agents, how to test code to our satisfaction, how to know if what AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle. How do we automate our human-in-the-loop vibe reactions?
| |
| ▲ | oblio 2 days ago | parent | next [-] | | > Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs.
This is funny in the sense that, in a properly built urban environment, bicycles are one of the best ways to add some physical activity to a time-constrained schedule, as we're discovering. |
| ▲ | philipwhiuk 2 days ago | parent | prev | next [-] | | > Instead of manual coding training, your time is better invested in learning to channel coding agents
All channelling is broken when the model is updated. Being knowledgeable about the foibles of a particular model release is a waste of time.
> how to test code to our satisfaction
Sure, testing has value.
> how to know if what AI did was any good
This is what code review is for.
> Testing without manual review, because manual review is just vibes
Calling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases. If your code reviews are 'vibes', you're bad at code review.
> If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.
To fix the analogy: you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap. | |
| ▲ | visarga 2 days ago | parent [-] | | > This is what code review is for. My point is that visual inspection of code is just "vibe testing", and you can't reproduce it. Even you yourself, 6 months later, can't fully repeat the vibe check "LGTM" signal. That is why the proper form is a code test. |
| |
| ▲ | ben_w 2 days ago | parent | prev [-] | | Yes and no. Yes, I reckon coding is dead. No, that doesn't mean there's nothing to learn. People like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote: First year of university, I went to a local store and picked up three items each costing less than £1; the cashier rang up a total of more than £3 (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later). The till itself was undoubtedly perfectly executing whatever maths it had been given; I assume the cashier mistyped or double-scanned. As I said, I had the exact total, and the fact that I had to explain "three items costing less than £1 each cannot add up to more than £3" to the cashier shows that even this trivial level of mental arithmetic is not universal. I now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud. | |
| ▲ | visarga 2 days ago | parent [-] | | > But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking).
Code review done visually is "just vibe testing" in my book. It is not something you can reproduce; it depends on the context in your head at this moment. So we need actual code tests. Relying on "Looks Good To Me" is hand-waving, code-smell-level testing. We are discussing vibe coding, but the problem is actually vibe testing. You don't even need to be in the AI age to vibe test; it's how we always did it when manually reviewing code. And in this age it means "walking your motorcycle" speed, so we need to automate this by more extensive code tests. | |
| ▲ | ben_w 2 days ago | parent [-] | | I agree that actual tests are also necessary, that code review is not enough by itself. As LLMs can also write tests, I think getting as close as is sane to 100% code coverage is almost the first thing people should be doing with LLM assistance (and also, "as close as is sane": make sure that it really is a question of "I thought carefully and have good reason why there's no point testing this" rather than "I'm done writing test code, I'm sure it's fine to not test this", because LLMs are just that cheap). However, code review can spot things like "this is O(n^2) when it could be O(n•log(n))", or "you're doing a server round trip for each item instead of parallelising them" etc. You can also ask an LLM for a code review. They're fast and cheap, and whatever the LLM catches is something you get without having to waste a coworker's time. But LLMs have blind spots, and more importantly all LLMs (being trained on roughly the same stuff in roughly the same way) have roughly the same blind spots, whereas human blind spots are less correlated and expand coverage. And code smells are still relevant for LLMs. You do want to make sure they're e.g. using a centralised UI style system and not copy-pasting style into each widget, because duplication wastes tokens and is harder to correctly update with LLMs for much the same reason it is with humans: stuff gets missed during the process when it's copypasta. | | |
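To make the review-versus-tests point concrete, here is a minimal hypothetical sketch (names and data are illustrative, not from any real codebase): both versions pass the same correctness test, so only a reviewer, human or LLM, would flag the quadratic one.

    # Hypothetical example: both implementations satisfy the same test,
    # but only code review would flag the O(n^2) version.

    def dedupe_quadratic(items):
        """Remove duplicates, preserving order -- O(n^2) membership checks."""
        seen = []
        for item in items:
            if item not in seen:   # linear scan inside a loop
                seen.append(item)
        return seen

    def dedupe_linear(items):
        """Same behaviour, but O(n) by tracking seen items in a set."""
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

    def test_dedupe():
        data = [3, 1, 3, 2, 1]
        assert dedupe_quadratic(data) == [3, 1, 2]
        assert dedupe_linear(data) == [3, 1, 2]

    if __name__ == "__main__":
        test_dedupe()
        print("both pass -- the test alone can't tell them apart")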
| ▲ | visarga 2 days ago | parent [-] | | I am personally working on formalizing the design stage as well, the core concepts being Architecture, Goal, Solution and Implementation. That would make something like the complexity of an algorithm an explicit decision in a graph. It would make constraints and dependencies explicitly formalized. You can track any code to its solution (design stage) and goals, account for everything top-down and bottom-up, and assign tests for all nodes. Take a look here: https://github.com/horiacristescu/archlib/blob/main/examples... (but it's still WIP, I am not there yet) |
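As a rough illustration of that idea, the sketch below models design stages as linked nodes with tests attached; it is a hypothetical toy, not the actual archlib structure, and every name in it is assumed for illustration.

    # Hypothetical sketch of a design-traceability graph. The node kinds
    # (architecture, goal, solution, implementation) follow the comment above;
    # everything else is assumed for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                      # "architecture" | "goal" | "solution" | "implementation"
        name: str
        parent: "Node | None" = None   # link back toward the design stage it serves
        tests: list[str] = field(default_factory=list)  # tests assigned to this node

        def trace(self):
            """Walk from code back up to the goal it serves (bottom-up accounting)."""
            node, chain = self, []
            while node is not None:
                chain.append(f"{node.kind}: {node.name}")
                node = node.parent
            return chain

    arch = Node("architecture", "event-driven pipeline")
    goal = Node("goal", "dedupe incoming records", parent=arch)
    sol = Node("solution", "hash-set pass, O(n)", parent=goal,
               tests=["test_complexity_budget"])
    impl = Node("implementation", "dedupe_linear()", parent=sol,
                tests=["test_dedupe"])

    print(" -> ".join(impl.trace()))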
|
|
|
|