| ▲ | lmorchard 3 days ago | parent | next [-] | | What you consider fun isn't universal. Some folks don't want to just tinker for half an hour, some folks enjoy getting a particular result that meets specific goals. Some folks don't find the mechanics of putting lines of code together as fun as what the code does when it runs. That might sound like paid work to you, but it can be gratifying for not-you. | | |
| ▲ | chung8123 3 days ago | parent | next [-] | | For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all the systems and tools, and AI makes all of that 10x easier. When I hit something I cannot figure out, instead of googling for half an hour it's 10 minutes with AI. | | |
| ▲ | mbirth 3 days ago | parent [-] | | The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser. | | |
| ▲ | qudat 3 days ago | parent | next [-] | | Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ... | | |
| ▲ | Akronymus 2 days ago | parent | next [-] | | So far, each and every time I used an LLM to help me with something, it hallucinated non-existent functions or was incorrect in an important but non-obvious way. Though, I guess I do treat LLMs as a last-resort long shot for when other documentation is failing me. | | |
| ▲ | naasking 2 days ago | parent | next [-] | | Knowing how to use LLMs is a skill. Just winging it without any practice or exploration of how the tool fails can produce poor results. | | |
| ▲ | b112 2 days ago | parent [-] | | "You're holding it wrong" 99% of an LLM's usefulness vanishes, if it behaves like an addled old man. "What's that sonny? But you said you wanted that!" "Wait, we did that last week? Sorry let me look at this again" "What? What do you mean, we already did this part?!" | | |
| ▲ | naasking 2 days ago | parent [-] | | Wrong mental model. Addled old men can't write code 1000x faster than any human. | | |
| ▲ | b112 2 days ago | parent [-] | | I'd prefer 1x "wrong stuff" to wrong stuff blasted 1000x. How is that helpful? Further, they can't write code that fast, because you have to spend 1000x the time explaining it to them. | | |
| ▲ | naasking a day ago | parent [-] | | Except it's not 1000x wrong stuff, that's the point. But don't worry, the Amish are welcoming of new luddites! |
|
|
|
| |
| ▲ | oblio 2 days ago | parent | prev [-] | | Which LLMs have you tried? Claude Code seems to be decent at not hallucinating; Gemini CLI is more eager. I don't think current LLMs take you all the way, but a powerful code generator is a useful thing: just assemble guardrails and keep an eye on it. | | |
| ▲ | Akronymus 2 days ago | parent [-] | | Mostly ChatGPT, because I see 0 value in paying for any LLM, nor do I wish to give up my data to any LLM provider. | | |
| ▲ | Anamon 2 days ago | parent | next [-] | | Speaking as someone who doesn't really like or do LLM-assisted coding either: at least try Gemini. ChatGPT is the absolute worst you could use. I was quite shocked when I compared the two on the same tasks. Gemini gets decent initial results you can build on. ChatGPT generates 99% absolutely unusable rubbish. The difference is so extreme, it's not even a competition anymore. I now understand why Altman announced "Code Red" at OpenAI. If their tools don't catch up drastically, and fast, they'll be one for the history books soon. Wouldn't be the first time the big, central early mover in a new market suddenly disappears, steamrolled by the later entrants. | |
| ▲ | oblio 2 days ago | parent | prev [-] | | They work better with project context and access to tools, so yeah, the web interface is not their best foot forward. That doesn't mean the agents are amazing, but they can be useful. | | |
| ▲ | Akronymus 2 days ago | parent [-] | | A simple "how do I access x in y framework in the intended way" shouldn't require any more context. Instead of telling me about the z option, it keeps hallucinating something that doesn't exist and even says it's in the docs when it isn't. Literally just wasting my time. | | |
| ▲ | oblio 2 days ago | parent [-] | | I was in the same camp until a few months ago. I now think they're valid tools, like compilers. Not in the sense that everyone compares them (compilers made asm development a minuscule niche of development). But in the sense that even today many people don't use compilers or static analysis tools. But that world is slowly shrinking. Same for LLMs, the non LLM world will probably shrink. You might be able to have a long and successful career without touching them for code development. Personally I'd rather check them out since tools are just tools. |
|
|
|
|
| |
| ▲ | _ikke_ 3 days ago | parent | prev [-] | | As long as what it says is reliable and not made up. | | |
| ▲ | qudat 2 days ago | parent | next [-] | | That's true for internet searching. How many times have you gone to SO, seen a confident answer, tried it, and it failed to do what you needed? | | |
| ▲ | Anamon 2 days ago | parent [-] | | Then you write a comment, maybe even figure out the correct solution and fix the answer. If you're lucky, somebody already did. Everybody wins. That's what LLMs take away. Nothing is given back to the community, nothing is added to shared knowledge, no differing opinions are exchanged. It just steals other people's work from a time when work was still shared and discussed, removes any indication of its source, claims it's a new thing, and gives you no way to contribute back, or even discuss it and maybe get confronted with different opinions or even discover a better way. Let's not forget that one of the main reasons why LLMs are useful for coding in the first place is that they scraped SO from the time when people still used it. |
| |
| ▲ | anakaine 3 days ago | parent | prev [-] | | I feel like we are just covering whataboutism tropes now. You can absolutely learn from an LLM. Sometimes documentation sucks and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate. And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back end deep dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC2 compliant and OWASP proof, nor does it need ISO27001 compliance testing, because after all this is just for fun, for me. |
|
| |
| ▲ | hyperadvanced 3 days ago | parent | prev | next [-] | | You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working. | | |
| ▲ | inferiorhuman 3 days ago | parent [-] | | Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either. | | |
| ▲ | ben_w 3 days ago | parent | next [-] | | Necessarily, LLM output that works isn't gibberish. The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because it would stop in the middle of writing a function. | | |
| ▲ | inferiorhuman 2 days ago | parent [-] | | > Necessarily, LLM output that works isn't gibberish.
Hardly. Poorly conjured up code can still work. | | |
| ▲ | ben_w 2 days ago | parent [-] | | "Gibberish" code is necessary code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from that solution, c.f. how paper was inspired by watching wasps at work. Even the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand. |
|
| |
| ▲ | oblio 2 days ago | parent | prev [-] | | It's not gibberish. More than that, LLMs frequently write comments (some are fluff but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, stuff like that, plus looking at the reasoning while it's happening provides more clues. Also, it's actually fun watching LLMs debug, since they're reasonably similar to devs while investigating, but they have a data bank the size of the internet so they can pull hints that sometimes surprise even experienced devs. I think hard-earned knowledge coming from actual coding is still useful to stay sharp, but it might turn out the balance is something like 25% handmade - 75% LLM made. | | |
| ▲ | inferiorhuman 2 days ago | parent [-] | | > they have a data bank the size of the internet so they can pull hints that sometimes surprise even experienced devs.
That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed the resources that humans would otherwise use to find answers." I just discovered another victim: the Renesas forums. Cloudflare is blocking me from accessing the site completely, the only site I've ever had this happen to. But I'm glad you're able to have your fun.
> it might turn out the balance is something like 25% handmade - 75% LLM made.
Doubtful. As the arms race continues, AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software. | | |
| ▲ | ben_w 2 days ago | parent | next [-] | | > they've stolen a mountain of information In law, training is not itself theft. Pirating books for any reason including training is still a copyright violation, but the judges ruled specifically that the training on data lawfully obtained was not itself an offence. Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft. (And indeed would struggle to be, given all search engines have for a long time been doing just that). > As the arms race continues AI DDoS bots will have less and less recent "training" material My experience as a human is that humans keep re-inventing the wheel, and if they instead re-read the solutions from even just 5 years earlier (or 10, or 15, or 20…) we'd have simpler code and tools that did all we wanted already. For example, "making a UI" peaked sometime between the late 90s and mid 2010s with WYSIWYG tools like Visual Basic (and the mac equivalent now known as Xojo) and Dreamweaver, and then in the final part of that a few good years where Interface Builder finally wasn't sucking on Xcode. And then everyone on the web went for React and Apple made SwiftUI with a preview mode that kept crashing. If LLMs had come before reactive UI, we'd have non-reactive alternatives that would probably suck less than all the weird things I keep seeing from reactive UIs. | |
| ▲ | Anamon 2 days ago | parent [-] | | > Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone, internet is legally not theft. That is simply not true. Freely available on the web doesn't mean it's in the Public Domain. The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well. Otherwise, the recent Spotify dump by Anna's Archive would be legal as well. It all depends on the license the thing is released under, chosen by the person who made it freely accessible on the web. This license is still very emphatically a legally binding document that restricts what someone can do with it. For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period. | |
| ▲ | ben_w 2 days ago | parent [-] | | > Freely available on the web doesn't mean it's in the Public Domain. Doesn't need to be. > The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well. I didn't say "any" use, I said this specific use. Here's the quote from the judge who decided this:
> 5. OVERALL ANALYSIS. After the four factors and any others deemed relevant are “explored, [ ] the results [are] weighed together, in light of the purposes of copyright.” Campbell, 510 U.S. at 578. The copies used to train specific LLMs were justified as a fair use. Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.
- https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
> Otherwise, the recent Spotify dump by Anna's Archive would be legal as well. I specifically said copyright infringement was separate. Because, guess what, so did the judge, in the next paragraph but one after the quote I just gave you. > For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period. It will be interesting to see if that holds up in future court cases. I wouldn't bank on it if I was you. |
|
| |
| ▲ | oblio 2 days ago | parent | prev [-] | | > That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would use to other find answers." Yes, but I can't stop them, can you? > But I'm glad you're able to have your fun. Unfortunately I have to be practical. > Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software. Almost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs. The hope that they'll run out of relevant material is slim. Oh, and at this point it's less about the core/kernel/LLMs than it is about building ol' fashioned procedural tooling aka code around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do, READ-EVAL-PRINT. I have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places. | | |
| ▲ | inferiorhuman 2 days ago | parent [-] | | > The hope that they'll run out of relevant material is slim.
If big corps are training their LLMs on their LLM-written code… | | |
| ▲ | oblio 2 days ago | parent [-] | | You're almost there: > If big corps are training their LLMs on their LLM written code <<and human reviewed code>>… The last part is important. | | |
|
|
|
|
|
| |
| ▲ | dcre 2 days ago | parent | prev | next [-] | | This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself. | | |
| ▲ | codr7 2 days ago | parent [-] | | Learning means friction, it's not going to happen any other way. | | |
| ▲ | onemoresoop 2 days ago | parent | next [-] | | Some of it is friction, some of it is play. With AI you can get faster to the play part where you do learn a fair bit. But in a sense I agree that less is retained. I think that is not because of lack of friction, instead it's the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction then I agree. |
| ▲ | dcre 2 days ago | parent | prev | next [-] | | I agree! I just don’t think the friction has to come from tediously trawling through a bunch of web pages that don’t contain the answer to your question. | |
| ▲ | mountain_peak 2 days ago | parent | prev [-] | | "What an LLM is to me is the most remarkable tool that we've ever come up with, and it's the equivalent of an e-bike for our minds" | |
| ▲ | codr7 a day ago | parent [-] | | Which is about as useful as a bike for our airplanes. |
|
|
| |
| ▲ | CamperBob2 3 days ago | parent | prev | next [-] | | I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again. | | |
| ▲ | ggggffggggg 3 days ago | parent | next [-] | | Exactly, the whole point is it wouldn’t take 30 minutes (more like 3 hours) if the tooling didn’t change all the fucking time. And if the ecosystem wasn’t a house of cards 8 layers of json configuration tall. Instead you’d learn it, remember it, and it would be useful next time. But it’s not. | |
| ▲ | Akronymus 2 days ago | parent | prev [-] | | And I don't want to use tools I don't understand at least to some degree. I always get nervous when I'm doing something but don't know why I'm doing it. | |
| ▲ | xnx 2 days ago | parent [-] | | Depends on what level of abstraction you're comfortable with. I have no problem driving a car I didn't build. | | |
| ▲ | Akronymus 2 days ago | parent [-] | | I didn't build my car either. But I understand a bit of most of the main mechanics, like how the ABS works, how power steering does, how an ICE works, and so on. |
|
|
| |
| ▲ | qualifck 2 days ago | parent | prev | next [-] | | Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it. Some people will try to vibe out everything with LLMs, but other people will use them to help engage with their coding more directly and better understand what's happening, not do worse. | |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | spankibalt 3 days ago | parent | prev | next [-] | | I don't think "learning" is a goal here... | |
| ▲ | ragequittah a day ago | parent | prev | next [-] | | Usually the thing you've learned after googling for half an hour is mostly that Google isn't very useful for search anymore. |
| ▲ | enraged_camel 3 days ago | parent | prev | next [-] | | >> The difference is that after you’ve googled it for ½ hour, you’ve learned something. I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday. | | |
| ▲ | rob74 3 days ago | parent [-] | | Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how that thing you wanted to do actually worked, the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)... |
| |
| ▲ | ajmurmann 2 days ago | parent | prev | next [-] | | I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc. | |
| ▲ | fleroviumna 2 days ago | parent | prev | next [-] | | [dead] | |
| ▲ | visarga 3 days ago | parent | prev [-] | | Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google; solving was already based on extensive web resources. While in 1995 you would really have needed to do it manually. Instead of manual coding training, your time is better invested in learning to channel coding agents, how to test code to our satisfaction, how to know if what AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle. How do we automate our human-in-the-loop vibe reactions? | | |
| ▲ | oblio 2 days ago | parent | next [-] | | > Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs. This is funny in the sense that in properly built urban environments, bicycles are one of the best ways to add some physical activity to a time-constrained schedule, as we're discovering. |
| ▲ | philipwhiuk 2 days ago | parent | prev | next [-] | | > Instead of manual coding training your time is better invested in learning to channel coding agents All channelling is broken when the model is updated. Being knowledgeable about the foibles of a particular model release is a waste of time. > how to test code to our satisfaction Sure testing has value. > how to know if what AI did was any good This is what code review is for. > Testing without manual review, because manual review is just vibes Calling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases. If your code reviews are 'vibes', you're bad at code review > If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle. To fix the analogy you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap. | | |
| ▲ | visarga 2 days ago | parent [-] | | > This is what code review is for. My point is that visual inspection of code is just "vibe testing", and you can't reproduce it. Even you yourself, 6 months later, can't fully repeat the vibe check "LGTM" signal. That is why the proper form is a code test. |
| |
| ▲ | ben_w 2 days ago | parent | prev [-] | | Yes and no. Yes, I reckon coding is dead. No, that doesn't mean there's nothing to learn. People like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote: First year of university, I went to a local store and picked up three items each costing less than £1; the cashier rang up a total of more than £3 (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later). The till itself was undoubtedly perfectly executing whatever maths it had been given; I assume the cashier mistyped or double-scanned. As I said, I had the exact total; the fact that I had to explain "three items costing less than £1 each cannot add up to more than £3" to the cashier shows that even this trivial level of mental arithmetic is not universal. I now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud. | |
| ▲ | visarga 2 days ago | parent [-] | | > But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). Code review done visually is "just vibe testing" in my book. It is not something you can reproduce, it depends on the context in your head this moment. So we need actual code tests. Relying on "Looks Good To Me" is hand waving, code smell level testing. We are discussing vibe coding but the problem is actually vibe testing. You don't even need to be in the AI age to vibe test, it's how we always did it when manually reviewing code. And in this age it means "walking your motorcycle" speed, we need to automate this by more extensive code tests. | | |
| ▲ | ben_w 2 days ago | parent [-] | | I agree that actual tests are also necessary, that code review is not enough by itself. As LLMs can also write tests, I think getting as close as is sane to 100% code coverage is almost the first thing people should be doing with LLM assistance (and also, "as close as is sane": make sure that it really is a question of "I thought carefully and have good reason why there's no point testing this" rather than "I'm done writing test code, I'm sure it's fine to not test this", because LLMs are just that cheap). However, code review can spot things like "this is O(n^2) when it could be O(n•log(n))", or "you're doing a server round trip for each item instead of parallelising them" etc. You can also ask an LLM for a code review. They're fast and cheap, and whatever the LLM catches is something you get without having to waste a coworker's time. But LLMs have blind spots, and more importantly all LLMs (being trained on roughly the same stuff in roughly the same way) have roughly the same blind spots, whereas human blind spots are less correlated and expand coverage. And code smells are still relevant for LLMs. You do want to make sure they're e.g. using a centralised UI style system and not copy-pasting style into each widget, because duplication wastes tokens and is harder to correctly update with LLMs for much the same reason it is with humans: stuff gets missed during the process when it's copypasta. | | |
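To make the complexity point concrete, here's a toy Go sketch of the kind of thing a reviewer catches even when the tests pass (the function names below are made up for illustration): both versions return the same result, so a small test suite is happy with either, but the first does a nested scan while the second uses a set.

    package dedup

    // DedupQuadratic removes duplicates with a nested scan over the output
    // slice. It's correct, and tests on small inputs pass, but it's O(n^2).
    func DedupQuadratic(xs []int) []int {
        var out []int
        for _, x := range xs {
            seen := false
            for _, y := range out {
                if y == x {
                    seen = true
                    break
                }
            }
            if !seen {
                out = append(out, x)
            }
        }
        return out
    }

    // DedupLinear does the same job with a set lookup, in roughly O(n).
    func DedupLinear(xs []int) []int {
        seen := make(map[int]struct{}, len(xs))
        var out []int
        for _, x := range xs {
            if _, ok := seen[x]; !ok {
                seen[x] = struct{}{}
                out = append(out, x)
            }
        }
        return out
    }

The same shape of review comment applies to the O(n^2)-vs-O(n log n) case mentioned above, or to un-parallelised server round trips.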
| ▲ | visarga 2 days ago | parent [-] | | I am personally working on formalizing the design stage as well, the core concepts being Architecture, Goal, Solution and Implementation. That would make something like the complexity of an algorithm an explicit decision in a graph. It would make constraints and dependencies explicitly formalized. You can track any code to its solution (design stage) and goals, account for everything top-down and bottom-up, and assign tests for all nodes. Take a look here: https://github.com/horiacristescu/archlib/blob/main/examples... (but it's still WIP, I am not there yet) |
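A loose sketch of that idea (nothing below is taken from the archlib repo; all names are hypothetical): design decisions become typed nodes in a graph, so an implementation node and its tests can be traced back to the goals they serve.

    package designgraph

    // Kind labels a node in the design graph.
    type Kind int

    const (
        Architecture Kind = iota
        Goal
        Solution
        Implementation
    )

    // Node is one explicit design decision, e.g. "use an O(n log n) sort here".
    type Node struct {
        ID      string
        Kind    Kind
        Text    string
        Parents []*Node  // the solutions/goals this decision serves
        Tests   []string // IDs of the tests assigned to this node
    }

    // TraceToGoals walks upward from a node and returns every Goal it
    // ultimately serves, giving the top-down / bottom-up accounting.
    func TraceToGoals(n *Node) []*Node {
        var goals []*Node
        seen := map[string]bool{}
        var walk func(*Node)
        walk = func(cur *Node) {
            if cur == nil || seen[cur.ID] {
                return
            }
            seen[cur.ID] = true
            if cur.Kind == Goal {
                goals = append(goals, cur)
            }
            for _, p := range cur.Parents {
                walk(p)
            }
        }
        walk(n)
        return goals
    }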
|
|
|
|
|
| |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | jimbokun 3 days ago | parent | prev [-] | | The difference is whether or not you find computers interesting and enjoy understanding how they work. For the people who just want to solve some problem unrelated to computers but require a computer for some part of the task, yes AI would be more “fun”. | | |
| ▲ | phil21 3 days ago | parent | next [-] | | I don’t find this to be true. I enjoy computers quite a bit. I enjoy the hardware, scaling problems, theory behind things, operating systems, networking, etc. Most of all I find what computers allow humanity to achieve extremely interesting and motivating. I call them the world’s most complicated robots. I don’t find coding overly fun in itself. What I find fun is the results I get when I program something that has the result I desire. Maybe that’s creating a service for friends to use, maybe it’s a personal IT project, maybe it’s having commercial-quality WiFi at home that everyone is amazed at when they visit, etc. Sometimes - even often - it’s the understanding that leads to pride in craftsmanship. But programming itself is just a chore for me to get done in service of whatever final outcome I’m attempting to achieve. Could be delivering bits on the internet for work, or automating OS installs to look at the 50 racks of servers humming away with cable porn level work done in the cabinets. I never enjoyed messing around with HTML all that much in the 90s. But I was motivated to learn it just enough to achieve the cool ideas I could come up with as a teenager and share them with my friends. I can appreciate clean maintainable code, which is the only real reason LLMs don’t scratch the itch as much as you’d expect for someone like me. | |
| ▲ | tjr 3 days ago | parent | next [-] | | What I really enjoy in programming is algorithms and bit-twiddling and stuff that might be in Knuth or HAKMEM or whatever. That’s fun. I like writing Lisp especially, and doing cool, elegant functional programs. I don’t enjoy boilerplate. I don’t necessarily enjoy all of the error checking and polishing and minutia in turning algorithms into shippable products. I find AI can be immensely helpful in making real things for people to use, but I still enjoy doing what I find fun by hand. | |
| ▲ | girvo 3 days ago | parent | prev [-] | | See, I do though. I enjoy the act, the craft of programming. It's intrinsically fun for me, and has been for the 25 years I've been doing it at this point, and it still hasn't stopped being fun! Different strokes I guess | | |
| ▲ | phil21 2 days ago | parent [-] | | Oh I totally agree! I have a lot of fun chatting with friends/coworkers who are super into programming as an art and/or passion. I just was pushing back on the “you aren’t into computers if you don’t get intrinsic joy out of programming itself” bit. |
|
| |
| ▲ | ben_w 2 days ago | parent | prev [-] | | > The difference is whether or not you find computers interesting and enjoy understanding how they work. I'm a stereotypical nerd, into learning for its own sake. I can explain computers from the quantum mechanics of band gaps in semiconductors up to fudging objects into C and the basics of operating systems with pre-emptive multitasking, virtual memory, and copy-on-write as they were c. 2004. Further up the stack it gets fuzzy (not that even these foundations aren't fuzzy; I know the "basics" of OSes, but I couldn't write one); e.g. SwiftUI is basically a magic box, and I find it a pain to work with as a result. LLM output is easier to understand than SwiftUI, even if the LLM itself has much weirder things going on inside. | |
| ▲ | jiveturkey 2 days ago | parent [-] | | So, can you tell me everything that happens after you type www.google.com<RET> into the browser? ;) | | |
| ▲ | ben_w 2 days ago | parent | next [-] | | Nope, but that was the example I had in mind when I chose my phrasing :) I think I can describe the principles at work with DNS, but not all of how IP packets are actually routed; the physics of beamforming and QAM, but none of the protocol of WiFi; the basics of error correction codes, but only the basics and they're probably out of date; the basic ideas used in private key crypto but not all of HTTPS; I'd have to look up the OSI 7-layer model to remember all the layers; I understand older UI systems (I've even written some from scratch), but I'm unsure how much of current web browsers are using system widgets vs. it all being styled HTML; interrupts as they used to be, but not necessarily as they still are; my knowledge of JavaScript is basic; and my total knowledge of how certificate signing works is the conceptual level of it being an application of public-private key cryptography. I have e.g. absolutely no idea why Chrome is famously a memory hog, and I've never learned how anything is scheduled between cores at the OS level. | |
| ▲ | jimbokun 2 days ago | parent | prev [-] | | Curious if anyone has turned answering this question into an entire book, because it could be a great read. |
|
|
|
| |
| ▲ | arjie 3 days ago | parent | prev | next [-] | | I think a lot of us just discovered that the actual programming isn't the fun part for us. It turns out I don't like writing code as much as I thought. I like solving my problems. The activation energy for a lot of things was much higher than it is now. Now it's pretty low. That's great for me. Baby's sleeping, 3d printer is rolling, and I get to make a little bit of progress on something super quick. It's fantastic. | | |
| ▲ | blitz_skull 3 days ago | parent | next [-] | | This 1000x! I had a bit of an identity crisis when AI first landed and started producing good code. “If I’m not the man who can type quickly, accurately, and build working programs… WHO AM I?” But as you pointed out, I quickly realized I was never that guy. I was the guy who made problems go away, usually with code. Now I can make so many problems go away, it feels like cheating. As it turns out, writing code isn’t super useful. It’s the application of the code, the judgement of which problems to solve and how to solve them, that truly matters. And that sparks a LOT of joy. | |
| ▲ | spankibalt 3 days ago | parent [-] | | [flagged] | | |
| ▲ | ragequittah 3 days ago | parent | next [-] | | I imagine this same argument happening when people stopped using machine code and assembly en masse and started using FORTRAN or COBOL. You don't really know what you're doing unless you're spending the effort I spent! | | |
| ▲ | spankibalt 3 days ago | parent [-] | | > "I imagine this same argument happening when people stopped using machine code and assembly en masse and started using FORTRAN or COBOL." Yeah, certainly. But since this has nothing to do with my argument, which was an answer to the very existential question of a (postulated) non-coder, and not a comment on a forgotten pissing contest between coders, it's utterly irrelevant. :( | | |
| ▲ | framapotari 3 days ago | parent [-] | | This is quite funny when you created the pissing contest between "coders" and "non-coders" in this thread. Those labels seem very important to you. | | |
| ▲ | spankibalt 3 days ago | parent [-] | | I didn't "create" the pissing contest, I merely pointed it out in someone else's drivel. And of course, these labels are important to me for (precise) language defines the boundaries of my world; coder vs. non-coder, medico vs. quack, writer vs. analphabet, truth vs. lie, etc. Elementary. | | |
| ▲ | cthalupa 2 days ago | parent [-] | | I find it quite interesting that you categorize non-coders the same as quacks, analphabets, and lies. I would never consider myself a coder - though I can and have written quite a lot of code over the years - because it has always been a means to an end for me. I don't particularly enjoy writing code. Programming isn't a passion. I can and have built working programs without a line of copy-and-pasted code off Stack Overflow or using an LLM. Because I needed to, to solve a problem. But there are things I would call myself, things I do and enjoy and am good at. But I wouldn't position people who can't do those things as being the same as a quack. You also claim to not be the one that started the pissing contest, but you called someone who claims to have written plenty of code themselves a coding-illiterate just because now they'd rather use an LLM than do it themselves. I suppose you could claim they are lying about it, or some no-true-Scotsman type argument, but that seems silly. You basically took some people talking about their own opinions on what they find enjoyable, and saying that AI-driven coding scratches that itch for them even more than writing code itself does, and then began to be quite hostile towards them with boatloads of denigrating language and derision. | |
| ▲ | spankibalt 2 days ago | parent [-] | | > "I find it quite interesting that you categorize non-coders the same as quacks, analphabets, and lies." I categorized them not as "the same", but as examples of concept-delineating polar opposites. This as answer to somebody who essentially trotted out the "but they're just labels!1!!" line, which was already considered intellectually lazy before it was turned into a sad meme by people who married their bongs back in the 90s. > "I would never consider myself a coder - though I can and have written quite a lot of code over the years [...]" Good for you. A coder, to me, is simply somebody who can produce working programs on their own and has the neccessary occupational (self-) respect. This fans out into several degrees of capabilities, of course. > "[...] but you called someone who claims to have written plenty of code themselves a coding-illiterate just because now they'd rather use an LLM than do it themselves. " No. I simply answered this one question: > “If I’m not the man who can [...] build working programs… WHO AM I?” Aside from that I reflected on an insulting(ly daft) but extremely common attitude amongst sloperators, especially on parasocial media platforms: > "As it turns out, writing code isn’t super useful." Imagine I go to some other SIG to say shit like this: As it turns out, [reading and writing words/playing or operating an instrument or tool/drawing/calculating/...] isn’t "super useful". Suckers! I'd expect to get properly mocked and then banned. > "You basically took some people talking about their own opinions on what they find enjoyable, [...]" Congratulations, you're just the next strawmen salesman. For the last time, bambini: I don't care if this guy uses LLMs and enjoys it... for that was never the focus of my argument at all. |
|
|
|
|
| |
| ▲ | jtbayly 3 days ago | parent | prev | next [-] | | You definitely completely misconstrued what was said and meant. It appears you have yet to grapple with the question asked. And I suspect you would be helped by doing so. Let me restate the question for you: If actually writing code can be done without you or any coworker now, by AI, what is your purpose? | | | |
| ▲ | ch4s3 3 days ago | parent | prev | next [-] | | Anyone who can’t read Proust and write a compelling essay about the themes is illiterate! | | |
| ▲ | spankibalt 3 days ago | parent [-] | | One day you actually might discover there's different levels of literacy. Like there's something between 0 and 255! Here's a pointer: Not being able to read (terminus technicus: analphabet) makes you a non-reader, just as not being able to cobble together a working proggie on your own merits makes you a non-coder. Man alive... | | |
| |
| ▲ | jimbokun 3 days ago | parent | prev [-] | | It’s possible to be someone who’s very good at writing quality programs but still enjoy delegating as much of that as possible to AI to focus on other things. | | |
| ▲ | spankibalt 3 days ago | parent [-] | | > "It’s possible to be someone who’s very good at writing quality programs but still enjoy delegating as much of that as possible to AI to focus on other things." That's true, Jimbo. And besides the point, because: 1. It wasn't about someone who's very good at writing quality programs, but someone who perceives themselves as someone who "is not the man who can build working programs". Do you comprehend the difference? 2. The enjoyment of using slopware wasn't part of the argument (see my answer to the question). That's not something I remotely care about. For the question my answer referred to, please see the cited text before the question mark. <3 3. People who define the very solution to the problem as "isn't super useful" do at least two things: They misunderstood, or misunderstand, their capabilities in problem solving/solutions, and most likely (have) delude(d) themselves, and... They look down on people who actually have done, do, and will do the legwork to solve these very problems ("Your work isn't super useful"). Back in the day we called 'em lamers and/or posers. I hope that clears things up. | | |
| ▲ | cthalupa 2 days ago | parent [-] | | > 1. It wasn't about someone who's very good at writing quality programs, but someone who perceives themselves as someone who "is not the man who can build working programs". Do you comprehend the difference? For someone who has taken heavy enjoyment in likening people to analphabets you seem to have entirely misunderstood (or if you understood, heavily misconstrued) the initial point of the person you are responding to. The entire point is that their identity WAS someone who is the man who can build those programs, and now AI was threatening to do the same thing. Unless you are presupposing that anyone who can be happy with the output of LLMs for writing code is simply incapable of writing quality code themselves. Which would be silly. |
|
|
|
| |
| ▲ | jtbayly 3 days ago | parent | prev | next [-] | | Exactly. And I was never particularly good at coding, either. Pairing with Gemini to finally figure out how to decompile an old Java app so I can make little changes to my user profile and some action files? That was fun! And I was never going to be able to figure out how to do it on my own. I had tried! | |
| ▲ | jimbokun 3 days ago | parent [-] | | Fewer things sound less interesting to me than that. | | |
| ▲ | jtbayly 3 days ago | parent | next [-] | | Fair enough. But that particular thing could be anything that has been bothering you but that you didn’t have the time or expertise to fix yourself. I wanted that fixed, and I had given up on ever seeing it fixed. Suddenly, in only two hours, I had it fixed. And I learned a lot in the process, too! |
| ▲ | cmwelsh 3 days ago | parent | prev [-] | | > Fewer things sound less interesting to me than that. To each their own! I think the market for folks who understand their own problems is exploding! It’s free money. |
|
| |
| ▲ | popalchemist 3 days ago | parent | prev | next [-] | | Literally shipping a vibe-coded feature as my baby sleeps, while reading this comment thread. It's the wild west again. I love it. | |
| ▲ | codr7 2 days ago | parent [-] | | Maybe you can tell us the name of the software so we can avoid it? | | |
| ▲ | mrkramer 2 days ago | parent | next [-] | | Google, Facebook, Amazon, Microsoft... they literally all have vibe-coded code; it's not about vibe-coded or not, it's about how well the code is designed and how efficient and bug-free it is. Ofc pro coders can debug it and fix it better than some amateur coder, but still LLMs are so valuable. I let Gemini vibe code little web projects for me and it serves me well. Although you have to explain everything step by step to it, and sometimes when it fixes one bug, it accidentally introduces another. But we fix bugs together and learn together. And btw when Gemini fixes bugs, it puts comments in the code on how the particular bug was fixed. | | |
| ▲ | popalchemist 2 days ago | parent | prev [-] | | It's a personal project. No need to be a dick. | | |
| ▲ | codr7 a day ago | parent [-] | | Presenting AI slop as software is about as big as it gets. |
|
|
| |
| ▲ | RicoElectrico 3 days ago | parent | prev [-] | | This. Busy-beavering is why desktop Linux is where it is - rewriting stuff, making it "elegant" while breaking backwards compatibility - instead of focusing on the outcome. | |
| ▲ | int_19h 3 days ago | parent [-] | | macOS breaks backwards compatibility all the time, and yet... | | |
| ▲ | sokoloff 2 days ago | parent [-] | | Other than security-related changes, as a user, I find macOS to be quite generous about its evolution, supporting deprecated APIs for many years, etc. SIP and the transition to a read-only system volume are the only two things that I remember broke things that I noticed. It’s not Windows-level of backwards compatibility, but it’s quite good overall from the user side. |
|
|
| |
| ▲ | freedomben 3 days ago | parent | prev | next [-] | | It's just fun in a different way now. I've long had dozens of ideas for things I wanted to build, and never enough time to really even build one of them. Over the last few months, I've been able to crank out several of these projects to satisfactory results. The code is not a beautiful work of art like I would prefer it to be, and the fun part is no longer the actual code and working in the code base like it used to be. The fun part now is being able to have an app or tool that gets the job I needed done. These are rarely important jobs, just things that I want as a personal user. Some of them have been good enough that I shipped them for other users, but the vast majority are just things I use personally. Just yesterday for example, I used AI to build a GTK app that has a bunch of sports team related sound effects built into them. I could have coded this by hand in 45 minutes, but it only took 10 minutes with AI. That's not the best part though. The best part is that I was able to use AI to get it building into an app image in a container so I can distribute it to myself as a single static file that I can execute on any system I want. Dicking with builds and distribution was always the painful part and something that I never enjoyed, but without it, usage is a pain. I've even gone back to projects I built a decade ago or more and got them building against modern libraries and distributed as RPMs or app images that I can trivially install on all of my systems. The joy is now in the results rather than the process, but it is joy nonetheless. | | |
| ▲ | iamflimflam1 3 days ago | parent | next [-] | | I think, for a lot of people, solving the problem was always the fun part. There is immense pleasure in a nice piece of code - something that is elegant, clever and simple at the same time. Grinding out code to get something finished - less fun… | | |
| ▲ | TuringTest 3 days ago | parent [-] | | It depends. Sometimes the joy is in discovering what problem you are solving, by exploring the space of possibilities for features and workflows in a domain. For that, having elegant and simple software is not needed; getting features fast to try out how they work is the basis of the pleasure, so having to write every detail by hand reduces the fun. | |
| ▲ | jimbokun 3 days ago | parent [-] | | Sounds like someone who enjoys listening to music but not composing or performing music. | | |
| ▲ | dpkirchner 3 days ago | parent | next [-] | | Or maybe someone DJing instead of creating music from scratch. | |
| ▲ | TuringTest 2 days ago | parent | prev [-] | | Or someone who enjoys playing music but not building their own instrument from scratch. | | |
| ▲ | jimbokun 2 days ago | parent [-] | | No. Building the instrument would be electrical engineering. Playing the instrument would be writing software. |
|
|
|
| |
| ▲ | apitman 3 days ago | parent | prev [-] | | I use LLMs for code at work, but I've been a bit hesitant to dive in for side projects because I'm worried about the cost. Is it necessary to pay $200/mo to actually ship things or will $20/mo do it? Obviously I could just try it myself and see how far I get, but I'm curious to hear from someone a bit further down the path. | |
| ▲ | vineyardmike 3 days ago | parent | next [-] | | The $20/mo subscription (Claude Code) that I've been using for my side projects has been more than enough for me 90% of the time. I mostly use the cheaper models lately (Haiku) and accept that it'll need a bit more intervention, but it's for personal stuff and fun so that's ok. If you use VSCode, Antigravity or another IDE that's trying to market their LLM integration, then you'll also get a tiny allowance of additional tokens through them. I'll use it for a few hours at a time, a couple days a week, often while watching TV or whatever. I do side projects more on long rainy weekends, and maybe not even every week during the summer. I'll hit the limit if I'm stuck inside on a boring Sunday and have an idea in my head I really wanted to try out and not stop until I'm done, but usually I never hit the limit. I don't think I've hit the limit since I switched my default to Haiku FWIW. The stats say I've generated 182,661 output tokens in the last month (across 16 days), and total usage, if via API, would cost $39.67. | |
| ▲ | naught0 a day ago | parent | prev | next [-] | | You can use Gemini for free. Or enable the API and pay a few bucks for variable usage every month. Could be cents if you don't use it much like me | |
| ▲ | indigodaddy 3 days ago | parent | prev | next [-] | | Check out the Google One AI Pro plan ($20/mo) in combination with Antigravity (Google's VS Code thingy) which has access to Opus 4.5. This combo (AG/AI Pro plan/Opus 4.5) is all the rage on Reddit, with users reporting incredibly generous limits (which most users say they never hit even with high usage) that reset every 5 hours. |
| ▲ | ben_w 2 days ago | parent | prev | next [-] | | $20 is fine. I used a free trial before Christmas, and my experience was essentially that my code review speed would've prevented me doing more than twice that anyway… and that's without a full time job, so if I was working full time, I'd only have enough free time to review $20/month of Claude's output. You can vibe code, i.e. no code review, but this builds up technical debt. Think of it as a junior who is doing one sprint's worth of work every 24 hours of wall-clock time when considering how much debt and how fast it will build up. | |
| ▲ | freedomben 3 days ago | parent | prev | next [-] | | Depending on how much you use, you can pay API prices and get pretty far for 20 bucks a month or less. If you exhaust that, surprisingly, I recommend getting Gemini with the Google AI Pro subscription. You can use the Gemini CLI a lot with that. | |
| ▲ | ACow_Adonis 2 days ago | parent | prev | next [-] | | In practice, I find it depends on your work scale, topic and cadence. I started on the $20 plans for a bit of an experiment, needing to see about this whole AI thing. And for the first month or two that was enough to get the flavor. It let me see how to work. I was still copy/pasting mostly, thinking about what to do. As I got more confident I moved to the agents and the integrated editors. Then I realised I could open more than one editor or agent at a time while each AI instance was doing its work. I discovered that when I'm getting the AI agents to summarise, write reports, investigate issues, make plans, implement changes, run builds, organise git, etc, now I can alt-tab and drive anywhere between 2-6 projects at once, and I don't have to do any of the boring boilerplate or administrivia, because the AI does that; it's what it's great for. What used to be unthinkable and annoying context switching now lets me focus in on different parts of the project that actually matter, firing off instructions, providing instructions to the next agent, ushering them out the door and then checking on the next intern in the queue. Give them feedback on their work, usher them on, next intern. The main task now is kind of managing the scope and context-window of each AI, and how to structure big projects to take advantage of that. Honestly though, I don't view it as too much more than functional decomposition. You've still got a big problem, now how do you break it down. At this rate I can sustain the $100 Claude plan, but honestly I don't need to go further than that, and that's basically me working full time in parallel streams, although I might be using it at relatively cheap times, so it or the $200 plan seems about right for full time work. I can see how theoretically you could go even above that, going into full auto-pilot mode, but I feel I'm already at a place of diminishing marginal returns, I don't usually go over the $100 Claude Code plan, and the AIs can't do the complex work reliably enough to be left alone anyway. So at the moment if you're going full time I feel they're the sweet spot. The $20 plans are fine for getting a flavor for the first month or two, but once you come up to speed you'll breeze past their limitations quickly. | |
| ▲ | camel_Snake 3 days ago | parent | prev | next [-] | | I have a feeling you are using SOTA models at work and aren't used to just how cheap the non-Anthropic/Google/OAI options are these days. GLM's coding subscription is like $6/month if you buy a full year. | |
| ▲ | Marha01 3 days ago | parent | prev [-] | | You can use AI code editor that allows you to use your own API key, so you pay per-token, not a fixed monthly fee. For example Cline or Roo Code. | | |
| ▲ | int_19h 3 days ago | parent [-] | | They all let you do that now, including Claude Code itself. You can choose between pay per token and subscription. Which means that a sensible way to go about those things is to start with a $20 subscription to get access to the best models, and then look at your extra per-token expenses and whether they justify that $200 monthly. |
|
|
| |
| ▲ | xav_authentique 3 days ago | parent | prev | next [-] | | I think this is showing the difference between people who like to /make/ things and those that like to make /things/. People that write software because they see a solution for a problem that can be fixed with software seem to benefit the most from LLM technology. It's almost the inverse for the people that write software because they like the process of writing software. | |
| ▲ | Defletter 3 days ago | parent | next [-] | | Surely there has to be some level of "getting stuff done"/"achieving a goal" when /making/ things, otherwise you'd be foregoing for-loops because writing each iteration manually is more fun. | | |
| ▲ | recursive 3 days ago | parent | next [-] | | I think you misunderstand the perspective of someone who likes writing code. It's not the pressing of keys on the keyboard. It's figuring out which keys to press. Setting aside for the moment that most loops have a dynamic iteration count, typing out the second loop body is not fun if it's the same as the first. I do code golf for fun. My favorite kind of code to write is code I'll never have to support. LLMs are not sparking joy. I wish I was old enough to retire. | |
| ▲ | jesse__ 3 days ago | parent | prev | next [-] | | I have a 10-year-old side project that I've dumped tens of thousands of hours into. "Ship the game" was an explicit non-goal of the project for the vast majority of that time. Sometimes, the journey is the destination. | | |
| ▲ | pests 3 days ago | parent [-] | | And sometimes the destination is the destination and the journey is a slog. | | |
| ▲ | jesse__ 3 days ago | parent [-] | | I mean, sure. I was just pointing out to the commenter that sometimes "getting stuff done" isn't the point. |
|
| |
| ▲ | xav_authentique 3 days ago | parent | prev [-] | | Sure, but, in the real world, for the software to deliver a solution, it doesn't really matter if something is modelled in beautiful objects and concise packages, or if it's written in one big method. So for those that are more on the making /things/ side of the spectrum, I guess they wouldn't care if the LLM outputs code that has each iteration written separately. It's just that if you really like to work on your craftsmanship, you spend most of the time rewriting/remodelling because that's where the fun is if you're more on the /making/ things side of the spectrum, and LLMs don't really assist in that part (yet?). Maybe LLMs could be used to discuss ways to model a problem space? |
| |
| ▲ | antonvs 3 days ago | parent | prev [-] | | I like both the process and the product, and I like using LLMs. You can use LLMs in whatever way works for you. Objections like the ones in this thread seem to assume that the LLM determines the process, but that’s not true at present. Perhaps they’re worrying about what might happen in future, but more likely they’re just resisting change in the usual way of inventing objections against something they haven’t seriously tried. These objections serve more as emotional justifications to avoid changing, than rational positions. |
| |
| ▲ | hxtk 3 days ago | parent | prev | next [-] | | As I've gotten more experience I've tended to find more fun in tinkering with architectures than tinkering with code. I'm currently working on making a secure zero-trust bare metal kubernetes deployment that relies on an immutable UKI and TPM remote attestation. I'm making heavy use of LLMs for the different implementation details as I experiment with the architecture. As far as I know, to the extent I'm doing anything novel, it's because it's not a reasonable approach for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work and the boot process and the kernel. I still enjoy writing code as well, but I see them as separate hobbies. LLMs can take my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold dead hands, but that's not always what I'm trying to do and I'll gladly have an LLM write my OCI layout directory to CPIO helper or my Bazel rule for putting together a configuration file and building the kernel so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts. | | |
| ▲ | MrDarcy 3 days ago | parent [-] | | So much this. The act of having the agent create a research report first, a detailed plan second, then maybe implement it is itself fun and enjoyable. The implementation is the tedious part these days, the pie in the sky research and planning is the fun part and the agent is a font of knowledge especially when it comes to integrating 3 or 4 languages together. | | |
| ▲ | hxtk 2 days ago | parent [-] | | This goes further into LLM usage than I prefer to go. I learn so much better when I do the research and make the plan myself that I wouldn’t let an LLM do that part even if I trusted the LLM to do a good job. I basically don’t outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I’m just saving myself a bunch of typing. “Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?” was an example from this project. | | |
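(For anyone curious what that kind of helper involves under the hood: below is a rough sketch of the fixed header such a writer emits per entry. It follows the standard SVR4 "newc" layout, magic 070701, which may differ from the "newcx" variant mentioned above, and it's in TypeScript purely as illustration rather than the Go, archive/tar.Writer-style API actually requested.)

```typescript
// Hypothetical sketch: building the 110-byte ASCII header for one entry in
// the SVR4 "newc" cpio format (magic 070701). Field names and order follow
// cpio(5); everything here is illustrative, not hxtk's actual helper.
import { Buffer } from "node:buffer";

function newcHeader(name: string, dataSize: number, mode = 0o100644, ino = 1): Buffer {
  const hex = (n: number) => n.toString(16).toUpperCase().padStart(8, "0");
  const fields = [
    ino, mode, 0, 0,                    // c_ino, c_mode, c_uid, c_gid
    1, Math.floor(Date.now() / 1000),   // c_nlink, c_mtime
    dataSize, 0, 0, 0, 0,               // c_filesize, c_devmajor, c_devminor, c_rdevmajor, c_rdevminor
    Buffer.byteLength(name) + 1, 0,     // c_namesize (includes trailing NUL), c_check
  ];
  // Header + name + NUL is padded with NULs to a 4-byte boundary; an entry's
  // data gets the same 4-byte padding, and the archive ends with an empty
  // entry named "TRAILER!!!".
  const head = Buffer.from("070701" + fields.map(hex).join("") + name + "\0", "ascii");
  const pad = (4 - (head.length % 4)) % 4;
  return Buffer.concat([head, Buffer.alloc(pad)]);
}
```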
| ▲ | lmorchard 2 days ago | parent [-] | | Yeah, this is a lot of what I'm doing with LLM code generation these days: I've been there, I've done that, I vaguely know what the right code would look like when I see it. Rather than spend 30-60 minutes refreshing myself to swap the context back into my head, I prompt Claude to generate a thing that I know can be done. Much of the time, it generates basically what I would have written, but faster. Sometimes, better, because it has no concept of boredom or impatience while it produces exhaustive tests or fixes style problems. I review, test, demand refinements, and tweak a few things myself. By the end, I have a working thing and I've gotten a refresher on things anyway. |
|
|
| |
| ▲ | esperent 3 days ago | parent | prev | next [-] | | Something happened to me a few years ago. I used to write code professionally and contribute to open source a lot. I was freelancing on other people's projects and contributing to mature projects so I was doing hard work, mostly at a low level (I mean algorithms, performance fixes, small new features, rather than high level project architecture). I was working on an open source contribution for a few days. Something that I struggled with, but I enjoyed the challenge and learned a lot from it. As it happened someone else submitted a PR fixing the same issue around the same time. I wasn't bothered if mine got picked or not, it happens. But I remember looking at how similar both of our contributions were and feeling like we were using our brains as computers, just crunching algorithms and pumping in knowledge to create some technical code that was (at the time) impossible for a computer to create. This stayed with me for a while and I decided that doing this technical algorithm crunching wasn't the best use of my human brain. I was making myself interchangeable with all the other human (and now AI) code crunchers. I should move on to a higher level, either architectural or management. This was a big deal for me because I did love (and still do) deeply understanding algorithms and mathematics. I was extremely fortunate with timing as it was just around one year before AI coding became mainstream but early enough that it wasn't a factor in this shift. Now an AI could probably churn out a decent version of that algorithm in a few minutes. I did move on to open my own business with my partner and haven't written much code in a few years. And when I do now I appreciate that I can focus on the high level stuff and create something that my business needs in a few hours without exhausting myself on low level algorithm crunching. This isn't meant to put down the enjoyment of writing code for code's sake. I still do appreciate well written code and the craft that goes into it. I'm just documenting my personal shift and noting that enjoyment can be found on both sides. | |
| ▲ | wincy 3 days ago | parent | prev | next [-] | | I’ve got kids and so seldom find myself with the time or energy to work on something. Cursor has really helped in that regard. I have an extensive media collection of very large VR video files with very unhelpful names. I needed a good way to review which ones I wanted to keep and which to discard (over 30TB, almost 2000 files). It was fun sitting with Cursor and Claude to work on setting up a quick web UI, with calls out to ffmpeg to generate snapshots. It handled the “boring parts” with aplomb, getting me an HTML page with a little JavaScript to serve as my front end, and making a super simple API. All this was still like 1000 lines and would have taken me days, or I would have copied some boilerplate then modified it a little. The problems Claude couldn’t figure out were also similarly interesting: the syntax of its ffmpeg calls was wrong and wasn’t skipping all the frames we didn’t want to generate, so it was taking 100x longer than necessary, seeking through every file. Then I made some optimizations in how I had it configured, then realized I’d generated thumbnails for 3 hours only for them to not display well on the page because each was an 8x1 tile. At that point Claude wanted to regenerate all the thumbnails and I said “just display the image twice, with the first half displayed the first time and the second half displayed the second time”, saving myself a few hours. Hacky, but for a personal project, the right solution. I still felt like I was tinkering in a way I haven’t in a while, and a project that I’d never have gotten around to (I’d probably have just bought another new hard drive instead) took me a couple of hours, most of which was actually marking the files as keep or delete. I ended up deleting 12TB of stuff I didn’t want, and it felt cool to write myself a bespoke tool rather than search around on the off chance that such a thing already exists. It also gave me a mental framework for how to approach little products like this in the future: often a web UI and a simple API backend like Node making external process calls is going to be easier than making a full-fat Windows UI. I have a similarly sized STL library from 3D printing and think I could apply mostly the same idea to that; in fact it’s 99% the same except for swapping out the ffmpeg call for something that generates a snapshot of the STL at a few different angles. | |
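(An aside on the slow thumbnailing: that usually comes down to flag order. With -ss placed after -i, ffmpeg decodes from the start of the file; placed before -i, it seeks by keyframe. A minimal sketch of the kind of call such a Node backend might shell out to, with the paths, timestamp, and scale filter as made-up placeholders:)

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Grab a single frame `seconds` into `videoPath` and write it as a JPEG.
// Putting -ss BEFORE -i makes ffmpeg seek by keyframe instead of decoding
// every frame up to that point, which is the difference between milliseconds
// and minutes on huge VR files.
async function snapshot(videoPath: string, seconds: number, outPath: string): Promise<void> {
  await run("ffmpeg", [
    "-ss", String(seconds),   // fast input-side seek (before -i)
    "-i", videoPath,
    "-frames:v", "1",         // one frame only
    "-vf", "scale=640:-1",    // shrink for a thumbnail grid
    "-y", outPath,
  ]);
}

// e.g. snapshot("/media/vr/clip-0001.mp4", 300, "/tmp/thumbs/clip-0001.jpg");
```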
| ▲ | cco 3 days ago | parent | prev | next [-] | | There are many people who enjoy spending an afternoon working on a classic car. There are also many people who enjoy spending an afternoon driving a classic car. Sometimes there are people who enjoy both. Sometimes there are people that really like driving but not the tinkering and some who are the opposite. | | | |
| ▲ | Defletter 3 days ago | parent | prev | next [-] | | I yearn for the mindset where I actively choose to accomplish comparatively little in the brief spells I have to myself, and remain motivated. Part of what makes programming fun for me is actually achieving something. Which is not to say you have to use AI to be productive, or that you aren't achieving anything, but this is not the antithesis of what makes programming fun, only what makes it fun for you. | |
| ▲ | 6r17 3 days ago | parent | prev | next [-] | | Ultimately it's up to the user to decide what to do with his time; it's still a good bargain that leaves a lot of sovereignty to the user. I like to code a little too much; I got into deep tech to capacities I couldn't imagine before - but at some point you hit rock bottom and you gotta ship something that makes sense. I'm like a really technical "predator" - in the sense that, to be honest with myself, it has almost become a form of consumption rather than pure problem solving. For very passionate people it can be difficult to draw the line between pleasure and work - especially given that we just do what we like in the first place - so all that time feels robbed from us - while from the standpoint of a "shipper" who didn't care about it in the first place, it feels like freedom. But I'd argue that if anyone wants to jump into technical stuff, it has never been so openly accessible - you could join some niche Slack where some competent programmers were doing great stuff. Today a solo junior can ship you a key-value store that is going to be fighting Redis in benchmarks. It really is not a time to slack off, in my opinion - everything feels like it already exists and has mostly been dealt with. But again - those who are frustrated with the status quo will always find something to do. I get, however, that this has created a very different space where previously acquired skill sets don't necessarily translate as well today - maybe it's just going to be different to find its space than it was 10 years ago. I like that the cards have been re-dealt though - it's arguably way more open than the Stack Overflow, pre-AI era, where knowledge was much more difficult to create. | |
| ▲ | plagiarist 3 days ago | parent | prev | next [-] | | I do have productivity goals! I want to spend the half hour I have on the part I think is fun. Not on machine configuration, boilerplate, dependency resolution, 100 random errors with new frameworks that are maybe resolved with web searches. | |
| ▲ | simonw 3 days ago | parent | prev | next [-] | | If you only get one or two half-hours a week it's probably more fun to use those to build working software than it is to inch forward on a project that won't do anything interesting for several more months. | |
| ▲ | ch4s3 3 days ago | parent | prev | next [-] | | For me it automates a lot of the boilerplate that usually bogs me down on side projects. I can spin up all of the stuff I hate doing quickly and then fiddle with the interesting parts inside of a working scaffold of code. I recently did this with an Elixir wrapper around some Erlang OTP code I wanted to use. Figuring out how to glue together all of the parts that touched the Erlang and tracing all of the arguments through old OTP code would have absolutely stopped me from bothering with this in the past. Instead I’m having fun playing with the interface of my tool in ways that matter for my use case. | |
| ▲ | ashtonshears 3 days ago | parent | prev | next [-] | | I enjoy coding for the ability to turn ideas into software. Seeing more rapid feature development, and also more rapid code cleanup and project architecture cleanup is what makes AI assisted coding enjoyable to me | |
| ▲ | yieldcrv 3 days ago | parent | prev | next [-] | | Look, yeah, one-shotting stuff makes generic UIs: impressive feat, but generic. It’s getting years of side projects off the ground for me now, in languages I never learned or got professional validation for: Rust, Lua for Roblox … in 2 parallel terminal windows and Claude Code instances, all while I get to push frontend development further and more meticulously in a 3rd. UX-heavy design with SVG animations? I can do that now, and that’s fun for me. I can make experiences that I would never spend a business Quarter on, and I can rapidly iterate on designs in a way I would never pay a Fiverr contractor or three for. For me the main skill is knowing what I want, and it’s entirely questionable whether that’s a moat at all, but for now it is, because all those “no code” seeking product managers and ideas guys are just enamored that they can make a generic something compile. I know when to point out that the AI contradicted itself in a code concept, and when to interrupt when it’s about to go off the rails. So far so great, and my backend deployment proficiency has gone from CRUD-app only to replicating, understanding and surpassing what the veteran backend devs on my teams could do. I would previously call myself full stack, but now I know where my limits in understanding are. | |
| ▲ | lowbloodsugar 3 days ago | parent | prev | next [-] | | I enjoy noodling around with pointers and unsafe code in Rust. Claude wrote all the documentation, to Rust standards, with nice examples for every method. I decided to write an app in Rust with a React UI, and Claude wrote almost all the TypeScript for me. So I’ve used Claude at both ends of the spectrum. I had way more fun in every situation. AI is, fortunately, very bad at the things I find fun, at least for now, and very good at the things I find booooring (read in Scott Pilgrim voice). | |
| ▲ | framapotari 3 days ago | parent | prev | next [-] | | I find it interesting how you take your experience and generalize it by saying "you" instead of "I". This is how I read your post: > I don't know but to me this all sounds like the antithesis of what makes programming fun. I don't have productivity goals for hobby coding where I'd have to make the most of your half an hour -- that sounds too much like paid work to be fun. If I have a half an hour, I tinker for a half an hour and enjoy it. Then I continue when I have another half an hour again. (Or push into night because I can't make myself stop.) Reading it like this makes it obvious to me that what you find fun is not necessarily what other people find fun. Which shouldn't come as a surprise. Describing your experience and preferences as something more is where the water starts getting muddy. | |
| ▲ | satvikpendem 3 days ago | parent | prev | next [-] | | > There are two sorts of projects (or in general, people): artisans, and entrepreneurs. The latter see code as a means to an end, possibly monetized, and the former see code as the end in itself. Me from 9 days ago: https://news.ycombinator.com/item?id=46391392#46398917 | |
| ▲ | krisgenre 3 days ago | parent | prev | next [-] | | I have nearly two decades of programming experience, mostly server side. The other day I wanted a quick desktop (Linux) program to chat with an LLM. Found out about the Vicinae launcher, then chalked out an extension in React (which I have never used) to chat with an LLM using an OpenAI-compatible API. Antigravity wrote a bare-minimum working extension in a single prompt. I didn't even need to research how to write an extension for an app released only three to five months ago. I then used AI assistance to add more features and polish the UI. This was a fun weekend but I would have procrastinated forever without a coding agent. | |
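(For anyone who hasn't used one, the "OpenAI-compatible API" part boils down to a single HTTP call. A minimal sketch in TypeScript; the base URL, model name, and key handling below are placeholders, not anything specific to Vicinae or Antigravity:)

```typescript
// Minimal sketch of a chat call against any OpenAI-compatible endpoint.
// BASE_URL, the model name, and the API key are placeholders; point them at
// whatever hosted provider or local server you actually use.
const BASE_URL = "http://localhost:1234/v1";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? "not-needed-for-local"}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  const data = await res.json();
  // Standard response shape: the first choice's message content.
  return data.choices[0].message.content;
}
```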
| ▲ | css_apologist 2 days ago | parent | prev | next [-] | | LLMs are really showing how different programmers are from one another i am in your camp, i get 0 satisfaction out of seeing something appear on the screen which i don't deeply understand i want to feel the computer as i type, i've recently been toying with turning off syntax highlighting and LSPs (not for everyone), and i am surprised at the lack of distractions and feeling of craft and joy it brings me | |
| ▲ | chrysoprace 3 days ago | parent | prev | next [-] | | I think it just depends on the person or the type of project. If I'm learning something or building a hobby project, I'll usually just use an autocomplete agent and leave Claude Code at work. On the other hand, if I want to build something that I actually need, I may lean on AI assistants more because I'm more interested in the end product. There are certain tasks as well that I just don't need to do by hand, like typing an existing SQL schema into an ORM's DSL. | |
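(As a toy illustration of that last kind of task, the "SQL schema into an ORM's DSL" translation: here's a sketch using Drizzle-style definitions as one example of such a DSL; the table itself is invented.)

```typescript
// The kind of mechanical translation meant above: an existing table...
//
//   CREATE TABLE posts (
//     id         serial PRIMARY KEY,
//     title      text NOT NULL,
//     author_id  integer NOT NULL,
//     created_at timestamp DEFAULT now()
//   );
//
// ...retyped into an ORM's DSL (Drizzle's pg-core shown here).
import { pgTable, serial, text, integer, timestamp } from "drizzle-orm/pg-core";

export const posts = pgTable("posts", {
  id: serial("id").primaryKey(),
  title: text("title").notNull(),
  authorId: integer("author_id").notNull(),
  createdAt: timestamp("created_at").defaultNow(),
});
```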
| ▲ | ryang2718 3 days ago | parent | prev | next [-] | | I too have found this. However, I absolutely love being able to mock up a larger idea in 30 minutes to assess feasibility as a proof of concept before I sink a few hours into it. | |
| ▲ | popalchemist 3 days ago | parent | prev | next [-] | | Some people build because they enjoy the mechanics. Others build because they want to use the end product. That camp will get from A to B much more easily with AI, because for them it was never about the craft. And that's more than OK. | |
| ▲ | srcreigh 3 days ago | parent | prev | next [-] | | Historically, tinkerers had to stay within an extremely limited scope of what they know well enough to enjoy working on. AI changes that. If someone wants to code in a new area, it's 10000000x easier to get started. What if the # of handwritten lines of code is actually increasing with AI usage? | |
| ▲ | bdcravens 3 days ago | parent | prev | next [-] | | The problem with modern web development is that if you're not already doing it everyday, climbing the tree of dependencies just to get to the point where you have something show up on screen can be exhausting, and can take several of those half hour sessions. | |
| ▲ | Nevermark 2 days ago | parent | prev | next [-] | | Is the manual coding part of programming still fun or not? We have a lot of opinions on either side here. I think the classic division of the problems being solved might, for most people, resolve this seeming contradiction. For every problem, X% of the work is solving the necessary complexity of the problem: taming the original problem in relation to what computers are capable of doing, with some relevant, well-implemented libraries or APIs potentially helping to close that gap. Work in that scenario rarely feels like wasted time. But in reality, there is almost always another problem we have to solve: the Y% = (100-X)% of the work required for an actual solution, which involves wrangling with the mismatches between the available tools and the problem being solved. This can be relatively benign, just introducing some extra cute little puzzles that make our brains feel smart as we successfully win whack-a-mole. A side game that can even be refreshing. Or the stack of tools, and their quirks, that we need to use can be an unbounded (even compounding) generative system of pervasive mismatches and pernicious, non-obvious, not immediately recognizable trenches, over which we must build 1000 little bridges, and maybe a few historic bridges, just to create a path back to the original problem. And it is often evident that all this work is an artifact of 1000 less-than-perfect choices by others. (No judgement, just a fact of tool creation having its own difficulties.) That stuff can become energy-draining, to say the least. I think high-X problems are fun to solve. Most of our work goes into solving the original problem. Even finding out it was more complex than we thought feels like meaningful drama and increases the joy of resolving it. High-Y problems involve vast amounts of glue code and library wrappers with exception handling; the list in any code base can be significant, and can even overwhelm the actual problem-solving code. And all those mismatches often hold us back, to where our final solution inevitably has problems in situations we hope never happen, until we can come back for round N+1, for unbounded N. Any help from AI for the latter is a huge win. Those are not “real” problems. As tool stacks change, nobody will port Y-type solutions forward. (I tell myself, so I can sleep at night.) So that’s it. We are all different. But whatever acceleration AI gives us on type-Y problems is most likely to feel great. Enabling. Letting us work harder on things that are more important and lasting. And where AI is less of a boost, it's still a potentially welcome one, as an assistant. | |
| ▲ | christina97 3 days ago | parent | prev | next [-] | | I derive the majority of my hobby satisfaction from getting stuff done, not enjoying the process of crafting software. We probably enjoy quite different aspects of tinkering! LLMs make me have so much more fun. | |
| ▲ | ranger_danger 3 days ago | parent | prev | next [-] | | I think there can be other equally valid perspectives than your own. Some people have goals of actually finishing a project instead of just "tinkering"... and that's ok. Some say it might even be necessary. | |
| ▲ | themafia 3 days ago | parent | prev | next [-] | | On top of that there's a not insignificant chance you've actually just stolen the code through an automated copyright whitewashing system. That these people believe they're adding value while never once checking if the above is true really disappoints me with the current direction of technology. LLMs don't make everyone better, they make everything a copy. The upwards transfer of wealth will continue. | |
| ▲ | dukeyukey 3 days ago | parent | prev | next [-] | | Which is fine, because those things are what makes programming fun for you. Not for others. | |
| ▲ | schwartzworld 3 days ago | parent | prev | next [-] | | What about the boring parts of fun hobby projects? | |
| ▲ | fartfeatures 3 days ago | parent | prev [-] | | You could make the same argument about the printing press. Some people like forming the letters by hand, others enjoy actually writing. | | |
| ▲ | alwillis 3 days ago | parent | next [-] | | Actually, the invention of the printing press in 1450 created disruption, economic panic and institutional fear similar to what we're experiencing now: For centuries, the production of books was the exclusive domain of professional scribes and monks. To them, the printing press was an existential threat. Job Displacement: Scribes in Paris and other major cities reportedly went on strike or petitioned for bans, fearing they would be driven into poverty. The "Purity" Argument: Some critics argued that hand-copying was a spiritual act that instilled discipline, whereas the press was "mechanical" and "soulless." Aesthetic Elitism: Wealthy bibliophiles initially looked down on printed books as "cheap" or "ugly" compared to hand-illuminated manuscripts. Some collectors even refused to allow printed books in their libraries to maintain their prestige. Sound familiar? From "How the Printing Press Reshaped Associations" -- https://smsonline.net.au/blog/how-the-printing-press-reshape... and "How the Printing Press Changed the World" -- https://www.koolchangeprinting.com/post/how-the-printing-pre... | |
| ▲ | stryan 3 days ago | parent | next [-] | | I've seen this argument a few times before and I'm never quite convinced by it because, well, all those arguments are correct. It was an existential threat to the scribes and destroyed their jobs, the majority of printed books are considered less aesthetically pleasing than a properly illuminated manuscript, and hand copying is considered a spiritual act by many traditions. I'm not sure I'd say it's a correct argument, but considering everyone in this thread is a lot closer to being a scribe than a printing press owner, I'm surprised there isn't more sympathy. | |
| ▲ | gamewithnoname 3 days ago | parent | next [-] | | Exactly. What makes it even more odd for me is that they are mostly describing doing nothing when using their agents. I see the "providing important context, setting guardrails, orchestration" bits appended, and it seems like the shallowest, narrowest moat one can imagine. Why do people believe this part is any less tractable for future LLMs? Is it because they spent years gaining that experience? Some imagined fuzziness or other hand-waving while muttering something about the nature of "problem spaces"? That is the case for everything the LLMs are toppling at the moment. What is to say some new pre-training magic, post-training trick, or ingenious harness won't come along and drive some precious block of your engineering identity into obsolescence? The bits about 'the future is the product' are even stranger (the present is already the product?). To paraphrase theophite on Bluesky, people seem to believe that if there is a well free for all to draw from, there will still exist a substantial market willing to pay them to draw from this well. | |
| ▲ | fartfeatures 3 days ago | parent [-] | | Having AI working with and for me is hugely exciting. My creativity is not something an AI can outmode. It will augment it. Right now ideas are cheap, implementation is expensive. Soon, ideas will be more valuable and implementation will be cheap. The economy is not zero sum nor is creativity. | | |
| |
| ▲ | alwillis 3 days ago | parent | prev | next [-] | | The point being missed is that the printing press led to tens of millions of jobs and billions of dollars in revenue. So far, new technologies that people were initially afraid of have ended up creating whole new sets of jobs and industries. |
| ▲ | ako 3 days ago | parent | prev [-] | | But the world is better off with the scribes unemployed: ideas get to spread, more people can educate themselves through printed books. Maybe the world is better off with fewer coders, as more software ideas can materialize into working software faster? |
| |
| ▲ | jimbokun 3 days ago | parent | prev [-] | | Well the lesson is that for all of us who invested a lot of time and effort to become good software developers the value of our skill set is now near zero. | | |
| ▲ | fartfeatures 3 days ago | parent [-] | | Many of the same skills that we honed by investing that time and effort into being good software developers make us good AI prompters, we simply moved another layer of abstraction up the stack. |
|
| |
| ▲ | vehemenz 3 days ago | parent | prev | next [-] | | This does seem to be what many are arguing, even if the analogy is far from perfect. | |
| ▲ | anhner 3 days ago | parent | prev [-] | | Exactly! ...If the printing press spouted gibberish every 9 words. | | |
| ▲ | simonw 3 days ago | parent [-] | | That was LLMs in 2023. | | |
| ▲ | fragmede 3 days ago | parent [-] | | Respect to you. I ran out of energy to correct people's dated misconceptions. If they want to get left behind, it's not my problem. | | |
| ▲ | munksbeer 3 days ago | parent [-] | | At some point no-one is going to have to argue about this. I'm guessing a bit here, but my guess is that within 5 years, in 90%+ of jobs, if you're not using an AI assistant to code, you're going to be losing out on jobs. At that point, the argument over whether they're crap or not is done. I say this as someone who has been extremely sceptical about their ability to code in deep, complicated scenarios, but lately, claude opus is surprising me. And it will just get better. | |
| ▲ | int_19h 3 days ago | parent [-] | | > At that point, the argument over whether they're crap or not is done. Not really, it just transforms into a question of how many of those jobs are meaningful anyway, or more precisely, how much output from them is meaningful. | | |
| ▲ | munksbeer 2 days ago | parent [-] | | I don't agree. I've recently started using claude more than dabbling and I'm getting good use out of it. Not every task will be suitable at the moment, but many are. Give claude lots of direction (I've been creating instructions.txt files) and iterate on those. Ask claude to generate a plan and write it out to a file. Read the file, correct what needs correcting, then get it to implement. It works pretty well, you'll probably be surprised. I'm still doing a lot of thought work, but claude is writing a lot of the actual code. |
|
|
|
|
|
|
|