| ▲ | Folcon 15 hours ago |
| I'm going to try and be honest with you, because I'm where you were at three months ago. I honestly don't think there's anything I can say to convince you, because from my perspective that's a fool's errand, and the reason for that has nothing to do with the kind of person either of us is, but with what kind of work we're doing and what we're trying to accomplish. |
|
| The value I've personally been getting is that it improves my productivity in the specific areas where its average one-shot response quality is better than what I would do myself, because it is equivalent to me Googling an answer, reading 2 to 20 posts, consolidating that information together and synthesising an output. |
|
| And that's not to say that the output is good; it's to say that the cost of trying things as a result is much cheaper. It's still my job to refine, reflect, define and correct the problem, the approach, etc. |
|
| I can say this because it's painfully evident to me when I try to do something in areas where it really is weak, and I honestly doubt that the foundation model creators presently know how to improve it. |
|
| My personal evidence for this: after several years of tilting at those windmills, I'm successfully creating things that I have, on and off, spent the last decade trying to create and have had difficulty with. Not because I couldn't do it, but because the cost of change and iteration was so high that after trying a few things and failing, I would invariably move to simplifying the problem, because solving it was too expensive. I'm now solving a whole category of those problems, and this is different for me; I really feel it, because that sting of persistent failure and dread of trying is absent now. |
|
| That's my personal perspective on it, sorry it's so anecdotal :) |
|
| ▲ | bigfishrunning 13 hours ago | parent | next [-] |
| > The value I've personally been getting is that it improves my productivity in the specific areas where its average one-shot response quality is better than what I would do myself, because it is equivalent to me Googling an answer, reading 2 to 20 posts, consolidating that information together and synthesising an output. |
|
| > And that's not to say that the output is good; it's to say that the cost of trying things as a result is much cheaper. |
|
| But there's a hidden cost here: by not doing the reading and reasoning out the result yourself, you have learned nothing and your value has not increased. Perhaps you expended a bit less energy producing this output, but you've taken one more step down the road to atrophy. |
| |
| ▲ | rectang 7 hours ago | parent | next [-] | | Seeing the code that the LLM generates and occasionally asking it to explain has been an effective way to improve my understanding. It's better in some ways than reading documentation or doing tutorials because I'm working on a practical project I'm highly motivated by. I agree that there is benefit in doing research and reasoning, but in my experience skill acquisition through supervising an LLM has been more efficient because my learning is more focused. The LLM is a weird meld of domain expert/sycophant/scatterbrain but the explanations it gives about the code that it generates are quite educational. | |
| ▲ | ben_w 13 hours ago | parent | prev | next [-] | | I think there's a potential unstated assumption here, though forgive me if it was made explicit elsewhere and/or I missed it: LLM-assisted coding can be done with or without code review. |
|
| The original meaning of "vibe coding" was without, and I absolutely, totally agree this rapidly leads to a massive pile of technical debt. I tried it with some left-over credit on a free trial specifically to see what the result would be. Sure, it works, but it's a hell of a mess that will make future development fragile (unless the LLMs improve much faster than I'm expecting) for no good reason. |
|
| Before doing that, I used Claude Code the other way, with me doing code reviews to make sure it stayed aligned with my ideas of best practices. I'm not going to claim it was perfect: in one case it did a Python backend and web front end for a webcam; on a second, simultaneous project, a browser-based game engine and an example game for that engine; and on a third simultaneous project, a joke programming language. And I'm not a "real" Python dev or "real" web dev or any kind of compiler engineer (the last time I touched Yacc before this joke language was 20 years earlier, at university). But it produced code I was satisfied I could follow and understand, that wasn't terrible, and that had tests. |
|
| I wouldn't let a junior commit blindly without code review and tests, because I know what junior code looks like from all the times I've worked with juniors (or gone back to 20-year-old projects of my own). But even if I were happy to blindly accept a junior's code, or even if the LLM were senior- or lead-quality, the reason you're giving here means code review before acceptance is helpful for professional development even when all the devs are at the top of their games. | |
| ▲ | bigfishrunning 12 hours ago | parent | next [-] | | Yes, but I'm talking about more than code review: there is a ton of value in discovering all of the ways not to solve a problem. When reading 25 forum posts or whatever while trying to write some function, you're learning more than just the answer. You're picking up a ton of context about how these sorts of problems are solved. If all you're doing is reviewing the output of some code generator, your mental context is not growing in the same way. | | |
| ▲ | 0x262d 12 hours ago | parent [-] | | I'm curious if you think the same thing was lost with the transition from reading man pages and first-party documentation to going to stackoverflow or google first (at least, I assume the former was more common a couple decades ago) | | |
| ▲ | bigfishrunning 12 hours ago | parent | next [-] | | What was lost in that transition was that the required quality of first-party documentation decreased; generally, that first-party documentation simply didn't contain enough information, so you needed to determine things empirically or read source code to learn more. I do think the culture of "copy-and-paste from stackoverflow" harmed the general competency of programmers, but having more third-party information available was only a positive thing. | |
| ▲ | newsoftheday 10 hours ago | parent | prev | next [-] | | Before the modern AI age that began in 2022, man pages, SO and Google results were all the work of humans, not AI fabrication and hallucination. | |
| ▲ | coldtea 7 hours ago | parent | prev [-] | | A lot was lost then too. |
|
| |
| ▲ | Freebytes 9 hours ago | parent | prev [-] | | Merely choosing lines to copy and paste from one file of your own code to another is a learning experience for your brain. AI is excellent for removing a lot of grunt work, but that type of work also reinforces your brain, even if you think you are learning nothing. Something can still be lost even if AI is merely providing templates or scaffolding. The same can be said of using Google to find examples, though. You should try to come up with the correct function name or parameter list yourself, in your head, before using a search engine or AI. And that goes for the most simple examples, e.g. "SQL table creation example". These should be things we know off the top of our heads, so we should first try to type them out before we go to look for an answer. |
| |
| ▲ | Aeolun 10 hours ago | parent | prev | next [-] | | > By not doing the reading and reasoning out the result, you have learned nothing and your value has not increased AI helps at the margins. It’s like adding anti-piracy: some people would simply never have bought the game unless they could pirate it. There’s a large volume of simple tools and experimental software that I would simply never have had the time to build the traditional way. | |
| ▲ | Folcon 12 hours ago | parent | prev [-] | | I mean, you're not wrong. |
|
| I suppose the way I approach this is: I use libraries which solve problems that I have and that I understand in principle, because I know and understand the theory, but in practice I don't know the specific details, because I've not implemented the solution myself. And honestly, it's not my job to solve everything; I've just got to build something useful, or something that serves my goals. |
|
| I basically put LLMs into that category. I'm not much of a NIH kind of person; I'm happy to use libraries, including alpha ones, on projects if they've been vetted over the range of inputs that I care about. I'm not going to go into how to do that here, because honestly it's not that exciting, but there are very standard, boring ways to produce good guarantees about a library's behaviour, and as long as I've done that, I'm pretty happy. |
|
| So I suppose what I'm saying is that this isn't a hidden cost to me; it's a pragmatic decision I made, and I was happy with the trade-off :) When I want to learn, and believe me I do now and again, I'll focus on that then :) | |
| ▲ | newsoftheday 10 hours ago | parent [-] | | > I use libraries > I basically put LLM's into that category That says a lot to be sure. | | |
| ▲ | Folcon 6 hours ago | parent [-] | | Seeing as you've chosen to be ambiguous, I'll interpret your comment positively :) Otherwise feel free to put forward a criticism |
|
|
|
|
| ▲ | misja111 14 hours ago | parent | prev | next [-] |
| No, I agree with you, there are areas where AI is helping amazingly. Every now and then it helps me with some issue as well, one that would earlier have cost me hours and is now done in minutes. E.g. some framework that I'm not that familiar with, or doing the scaffolding for some unit test. However, this is only a small portion of my daily dev work. For most of my work, AI helps me little or not at all. E.g. adding a new feature to a large codebase: forget it. Debugging some production issue: maybe it helps me a little bit to find some code, but that's about it. And this is what my post was referring to: not that AI doesn't help at all, but the crazy claims (10x speedup in daily work) that you see all over social media. |
|
| ▲ | brianwawok 14 hours ago | parent | prev | next [-] |
| Example for me: I am primarily a web dev today. I needed some Kubernetes stuff set up. Usually that’s 4 hours of Google and guess-and-check; Claude did it better in 15 minutes. Even if all it does is speed up the stuff I suck at, that’s plenty. Oh boy, Docker builds, it saves my bacon there too. |
| |
| ▲ | Draiken 14 hours ago | parent [-] | | And you learned nothing, and you have no clue if what it spit out is good or not. How can you even assume what it did is "better" if you have no knowledge of Kubernetes in the first place? It's mere hope. Sure, it gets you somewhere, but you learned nothing along the way, and now you depend on the LLM to maintain it forever, given you don't want to learn the skill. |
|
| I use LLMs to help verify my work, and they can sometimes spot something I missed (more often they don't, but it's at least something). I also automate some boring stuff, like creating more variations of some tests, but even then I almost always have to read the output line by line to make sure the tests aren't completely bogus. Thinking about it now, it's likely better if I just ask what scenarios could be missing, because when they write the tests, they screw them up in subtle ways. |
|
| It does save me some time in certain tasks, like writing some Ansible, but I have to know and understand Ansible to be confident in any of it. |
|
| These "speedups" are mostly short-term gains bought at the sacrifice of long-term ones. Maybe you don't care about the long term, and that's fine. But if you do, you'll regret it sooner or later. |
|
| My theory is that AI is so popular because mediocrity is good enough to make money. You see the kind of crap that's built these days (even before LLMs) and it's mostly shit anyway, so whether it's shit built by people or by machines, who cares, right? Unfortunately I do care, and I'd rather we improve the world we live in instead of making it worse for a quick buck. IDK how or why learning and growing became so unpopular. | |
| ▲ | dpark 11 hours ago | parent | next [-] | | > Sure it gets you somewhere but you learned nothing in the way and now depend on the LLM to maintain it forever given you don't want to learn the skill. The kind of person who would vibe code a bunch of stuff and push it with zero understanding of what it does or how it does it is the kind of person who’s going to ruin the project with garbage and technical debt anyway. Using an LLM doesn’t mean you shouldn’t look at the results it produces. You should still check its results. You should correct it when it doesn’t meet your standards. You still need to understand it well enough to say “that seems right”. This isn’t about LLMs. This is just about basic care for quality. But also, I personally don’t care about being an expert at every single thing. I think that is an unachievable dream, and a poor use of individual time and effort. I also pay people to do stuff like maintenance on my car and installing HVAC systems. I want things done well. That doesn’t mean I have to do them myself, or even necessarily be an expert in them. | |
| ▲ | Bombthecat 11 hours ago | parent | prev | next [-] | | I noticed this already after around 6 months of heavy usage. Skills decline, even information gathering, etc. | |
| ▲ | jpadkins 9 hours ago | parent [-] | | I think it is more accurate to say some skills are declining (or not developing) while a different set of skills are improving (the skill of getting an LLM to produce functional output). Similar to if someone started writing a lot of C, their assembly coding skills may decline (or at least not develop). I think all higher levels of abstraction will create this effect. | | |
| ▲ | llmslave2 7 hours ago | parent [-] | | > while a different set of skills are improving (the skill of getting an LLM to produce functional output) Lmaooooo |
|
| |
| ▲ | p410n3 13 hours ago | parent | prev [-] | | I agree with both of your points, since I use LLMs for things I am not good at and don't give a single poop about. The only things I did with LLMs are three examples from the last two years: |
|
| - Some "temporary" tool I built years ago as a pareto-style workaround broke (as temporary tools do after some years). It's basically a wrapper that calls a bunch of XSLs on a bmecat.xml every 3-6 months. I did not care to learn XSL back then and I don't care to do it now. It's arcane and non-universal; some stuff only works with certain XSL processors. I asked the LLM to fix stuff 20 times and eventually it got it. That probably got the whole thing off my back for another couple of years. |
|
| - Some third-party tool we use has a timer feature with a bug where it sets a cookie every time you see a timer, once per timer (for whatever reason... the timers are set to end at a certain time and there is no reason to attach them to a user). The cookies have a lifetime of one year, and we run time-limited promotions twice a week, so that means two cookies a week for no reason. Eventually our WAF got triggered, because it has a rule to block requests when headers are crazy long, which ours were because of the cookies. I asked an LLM to give me a script that clears the cookie when it's older than 7 days (roughly like the sketch after this comment), because I remember the last time I hacked together cookie stuff it also felt very "wtf" in a javascript kinda way, and I did not care to relive that pain. This was in place until the third-party tool fixed the cookie lifetime some weeks later. |
|
| - We list products on a marketplace. The marketplace has their own category system; we have our own category system. Frankly, theirs kinda sucks for our use case because it lumps a lot of stuff together, but we needed to "translate" the categories anyway. So I exported all the unique "breadcrumbs" we have and gave that, plus the categories from the marketplace, to an LLM one by one by looping through the list. I then had an apprentice from another dept., who has vastly more product knowledge than me, look over that list in a day. The alternative would have been to have said apprentice do all that by hand, which is a task I would personally have HATED, so I tried to lessen the burden for them. |
|
| All these examples were on the free tier of whatever I used. |
|
| We also use a vector search at work: 300,000 products with weekly updates of the vector db (sketched below as well). We pay 250€/mo for all of the qdrant instances across all environments and 5-10€ in openai tokens. And we can easily switch whatever embedding model we use at any time; we can even self-host a model. |
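For illustration, here is a minimal sketch of what the cookie workaround described in the second bullet might look like, in TypeScript for the browser. The cookie names are hypothetical (the comment doesn't say what the third-party cookie is called), and since browsers don't expose a cookie's creation date, the sketch assumes a companion timestamp cookie:

```typescript
// Sketch only: "promoTimer" / "promoTimerSetAt" are made-up names for
// illustration. Browsers don't expose when a cookie was created, so we
// record a timestamp ourselves and delete the target cookie once it is
// older than 7 days.
const TARGET_COOKIE = "promoTimer";
const STAMP_COOKIE = "promoTimerSetAt";
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

function readCookie(name: string): string | null {
  const entry = document.cookie
    .split("; ")
    .find((c) => c.startsWith(name + "="));
  return entry ? decodeURIComponent(entry.slice(name.length + 1)) : null;
}

function clearStaleTimerCookie(): void {
  const stamp = readCookie(STAMP_COOKIE);
  if (stamp === null) {
    // First sighting of the third-party cookie: remember when that was.
    if (readCookie(TARGET_COOKIE) !== null) {
      document.cookie = `${STAMP_COOKIE}=${Date.now()}; path=/; max-age=31536000`;
    }
    return;
  }
  if (Date.now() - Number(stamp) > MAX_AGE_MS) {
    // Older than 7 days: expire both cookies immediately with max-age=0.
    // (Assumes the third-party cookie was set on this path/domain.)
    document.cookie = `${TARGET_COOKIE}=; path=/; max-age=0`;
    document.cookie = `${STAMP_COOKIE}=; path=/; max-age=0`;
  }
}

clearStaleTimerCookie();
```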
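And a rough sketch of the vector-search setup from the last paragraph, using the `@qdrant/js-client-rest` and `openai` npm packages. The collection name, embedding model, product shape, and URL are all assumptions, not details from the comment:

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";
import OpenAI from "openai";

const COLLECTION = "products"; // hypothetical collection name
const EMBEDDING_MODEL = "text-embedding-3-small"; // swappable, as the comment notes

const qdrant = new QdrantClient({ url: "http://localhost:6333" });
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Turn a piece of text into an embedding vector via the OpenAI API.
async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: EMBEDDING_MODEL,
    input: text,
  });
  return res.data[0].embedding;
}

// Weekly refresh: upsert each product's embedding into the collection.
async function refreshProducts(
  products: { id: number; name: string }[],
): Promise<void> {
  for (const p of products) {
    await qdrant.upsert(COLLECTION, {
      points: [{ id: p.id, vector: await embed(p.name), payload: { name: p.name } }],
    });
  }
}

// Query side: embed the search text and return the nearest products.
async function searchProducts(query: string, limit = 10) {
  return qdrant.search(COLLECTION, { vector: await embed(query), limit });
}
```

Swapping the embedding model (or self-hosting one) only changes `embed()`; the qdrant side just stores and searches whatever vectors it is given, which matches the flexibility the comment describes.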
|
|
|
| ▲ | newsoftheday 11 hours ago | parent | prev | next [-] |
| [flagged] |
| |
| ▲ | econ 8 hours ago | parent [-] | | It's great in unfamiliar terrain. | | |
| ▲ | FloorEgg 7 hours ago | parent [-] | | It's great when the terrain is unfamiliar to the user but extremely familiar to the LLM, and it's useless in the opposite case. The best programmers are going to be extremely familiar with terrains that are unfamiliar to the LLMs, which is why their views are so negative. These are people working on core parts of complex, high-performing, highly scalable systems, and people with extreme appreciation for the craft of programming and code quality. But the most productive developers focused on higher-level user value and functionality (e.g. pumping out full-stack apps or features) are more likely to be working with commonly used technologies, while also jumping around between technologies as a means to a functionality or UX objective rather than as an end in itself (skill development, elegant code, or satisfying curiosity). I think this explains a lot of the difference in perspectives: LLMs offer value in the latter context but not the former. It's a shame that so many of the people in one context can't empathize with the people in the other. |
|
|
|
| ▲ | lawlessone 10 hours ago | parent | prev [-] |
| You haven't contributed much to GitHub since 2022? *Edit: unless your commits are elsewhere? |