| ▲ | sdevonoes 10 hours ago |
| Any engineer (any person actually) can “learn to use AI” in a couple of days. It’s not rocket science; there’s no chance of being left behind. If you haven’t used LLMs at all, a weekend would be enough to be on par with everyone else in the industry |
|
| ▲ | rstuart4133 5 hours ago | parent | next [-] |
| Others are disagreeing with you here, and I do too. The difference is profound, and it takes more than a couple of days to get your head around the implications. I'd summarise it as: "if you give a computer the same input it always produces the same output, but if you give a model the same input it always produces different output". Add to that the output is often wrong and it can't reliably follow instructions, and the difference is so great it breaks most of your intuitions. The reward for working with this piece of unreliable jelly is that it can be far smarter than you (think the difference between a man with a shovel and a 20 ton excavator - they can literally find bugs in minutes that would take a human hours or days), and it knows far more than you. The engineering challenge is to make this near-random machine produce a reliable product. It isn't easy. The hype you see around them is that it's trivially easy to get one to produce a feature-rich but very unreliable product, as Anthropic demonstrates with their vibe-coded claude-cli. I refuse to use it now. Among its other charms, it triggers a BSOD on Windows: https://github.com/anthropics/claude-code/issues/30137 (Granted, it's just another Windows bug: https://learn.microsoft.com/en-ca/answers/questions/5814272/..., but if you are shipping to Windows you should be working around such bugs.) |
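The "same input, different output" point comes from how LLMs decode: each next token is sampled from a probability distribution, so identical prompts diverge across runs unless decoding is greedy (temperature 0). A toy sketch of that sampling step, with a made-up vocabulary and made-up probabilities (not any real model's API):

```python
import random

def sample_next_token(probs, temperature=1.0):
    """Pick the next token from a probability distribution over the vocabulary.

    temperature == 0 means greedy decoding: always the most likely token,
    hence deterministic. Any positive temperature means sampling, hence
    the same input can produce a different output on every call.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    # Re-weight by temperature (flatter distribution as temperature rises),
    # then draw one token proportionally to its weight.
    weighted = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weighted.values())
    r = random.random() * total
    acc = 0.0
    for tok, w in weighted.items():
        acc += w
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

# Hypothetical next-token distribution after some prompt.
probs = {"fix": 0.5, "refactor": 0.3, "delete": 0.2}

# Greedy decoding: the same input always yields the same output.
print(sample_next_token(probs, temperature=0))  # always "fix"

# Sampled decoding: the same input yields varying outputs across runs.
print({sample_next_token(probs, temperature=1.0) for _ in range(1000)})
```

This is also why "just set temperature to 0" doesn't fully solve reproducibility in practice: hosted models batch requests and use non-deterministic floating-point kernels, so even greedy decoding can vary between runs.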
|
| ▲ | giancarlostoro 10 hours ago | parent | prev | next [-] |
| The better you are at architecting or even at directing a junior developer, the better your output too. Don't let AI make decisions; it's supposed to take your decisions and turn those into code. When AI makes decisions, the unexpected outcome is always on you. |
| |
| ▲ | sshine 10 hours ago | parent | next [-] | | > Don't let AI make decisions, it's supposed to take your decisions and turn those into code. I let the AI make decisions all the time. I often approve them, and I sometimes revert them. Most of the time they’re really good decisions based on my initial intent, followed by analysis I didn’t make but agree with. | | |
| ▲ | JeremyNT 9 hours ago | parent [-] | | I think there's a spectrum of where to draw the line. There's clearly some level where you want a human making decisions for even the most vibey of project, because without some kind of a spec about what you're trying to build and what features you want you'd get nonsense. But like... maybe don't stress the details too much. | | |
| ▲ | sshine 8 hours ago | parent [-] | | > clearly some level where you want a human making decisions Yes, clearly. There was a meme out there: "just make something cool idk". Statements like "Don't let AI make decisions" are made because of the loss of control we experience as mechanical parts of our work (such as writing to files) get automated. |
|
| |
| ▲ | pydry 10 hours ago | parent | prev [-] | | I always found it easier to write code myself than to direct a junior developer. The level of teaching involved would always mean the overall velocity of work slowed down. Some people say you can throw them the drudge work, but I find that if you're doing coding right (e.g. you don't let your code base degenerate into a mess of boilerplate), there is barely any drudge work to do. | | |
| ▲ | giancarlostoro 7 hours ago | parent | next [-] | | You're missing the real goal of directing a junior, which is teaching them to be a team player. Junior devs will surpass your expectations; the rate at which they goof, or are about to goof, should decrease over time the more you mentor them. If you do it right, you now have a strong ally and coder on your side. Or would you rather someone else teach them their bad habits? | |
| ▲ | CamperBob2 10 hours ago | parent | prev [-] | | i always found it to be easier to write code myself than to direct a junior developer. Me, too. But that doesn't mean I'm a great developer, just a shitty manager. | | |
| ▲ | cassianoleal 9 hours ago | parent [-] | | Perhaps but at least when you are directing a junior developer, even if badly, you'll eventually get a non-junior developer on the other side. With an AI agent, you'll get ... what? | | |
| ▲ | CamperBob2 8 hours ago | parent [-] | | With current models, you're right, there will be nothing to show for the effort except the code itself. I suspect that will change sooner or later. Models will be cultivated over time the way we cultivate full-time employees now, with an acquired awareness of what they're building, new skills picked up in the process, and insight into how the larger system works. |
|
|
|
|
|
| ▲ | simonw 9 hours ago | parent | prev | next [-] |
| Firmly disagree. Learning how to use these tools effectively is unintuitively difficult. They're great at some stuff and terrible at other stuff in ways that are very hard to predict. I'm figuring out new and better ways to use them on a daily basis, and I've been an almost daily user for nearly three years. |
| |
| ▲ | ASalazarMX 7 hours ago | parent | next [-] | | They're difficult and hard to predict because they're still primitive, despite what their companies say. When (or if) they get advanced enough to deliver consistently, there will be no chance of being left behind, because even a kid will be able to use them effectively. Right now they're still at the gimmick level, although a very impressive one. | | |
| ▲ | simonw 6 hours ago | parent [-] | | If the models get to a point of total consistency there's still a LOT that we need to figure out and learn about how to use them. Let's say models can exactly and correctly write any code you ask of them. - How do you break down a project into a sequence of requests to models? - How can you most effectively parallelize the work? Models will never be instant, so there will always be benefits in working out how best to use several agents at once. - Now that the models can handle the details of Lean, and SwiftUI, and Oracle stored procedures, and thousands of other technologies that you never got around to learning in the past... what can you do with those, and how do you pick which projects to go after? - How do you collaborate with other engineers and designers and product people in a world where you can churn out the right code reliably in a few minutes? The models we have today are already effective enough to change the shape of our work as software engineers. As the models continue to improve, figuring out and adapting to whatever that new shape is becomes even more complicated. |
| |
| ▲ | dude250711 9 hours ago | parent | prev | next [-] | | If these tools stopped drastically improving, what justifies the crazy valuations? | | | |
| ▲ | Rekindle8090 3 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | elevatortrim 10 hours ago | parent | prev | next [-] |
| Just learn, sure. But the difference between my efficiency using it on day 2 and in month 6 is significant. Yet I feel I am barely scratching the surface of it. |
|
| ▲ | embedding-shape 10 hours ago | parent | prev | next [-] |
| > a weekend would be enough to be on par with everyone else in the industry I kind of agree in general that it is a learned skill, but considering how unclear people generally are when they communicate, I'm guessing it'll take longer than a weekend to catch up, especially to people who've been working on precise and careful communication and language for years already in a professional environment. |
|
| ▲ | bluegatty 10 hours ago | parent | prev | next [-] |
| A weekend is enough to get going, but not nearly enough to 'be on par' with everyone else. That said, what we have learned in the last year could be compressed quite a lot - there are a lot of steps we could skip, and 'learn by failure' that need not be repeated. It takes a while to get the subtleties of it; it's among the most highly nuanced things we've ever encountered. |
|
| ▲ | irishcoffee 10 hours ago | parent | prev | next [-] |
| /thread If one has been reading a wide variety of books/papers/articles/whatever their whole life, and one has been mindful of how to communicate with the "written word" as it were, it takes about 3 hours to be wildly effective with this technology. I think it took longer to learn google-fu than it did to learn how to use this technology effectively. |
|
| ▲ | archagon 7 hours ago | parent | prev [-] |
| The unspoken (and utterly antisocial) subtext is "we are aiming for an exponential leveraging of our labor and complete domination of the market." |