▲ | rebeccaskinner a day ago |
Looking at my own use of AI, and at how I see other engineers use it, it often feels like two steps forward and two steps back, and overall not a lot of real progress yet. I see people using agents to develop features, but the amount of time they spend to actually make the agent do the work usually outweighs the time they'd have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck, it takes long enough for even a good developer to notice and re-engage their critical thinking that it can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive for quality, but that's hard to sell as a benefit, and most people seem to feel that using the LLM only for review means they aren't using it enough.

Even for engineers, there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized ways of indexing and searching data, but they never did, and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.

My guess is that, like a lot of other technology-driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win.
▲ | ernst_klim a day ago | parent | next [-]
> the amount of time they spend to actually make the agent do the work usually outweighs the time they'd have spent just building the feature themselves

Exactly my experience. I feel like LLMs have potential as expert systems or smart web search, but not as a generative tool, for either code or text.

You spend more time understanding stuff than writing code, and you need to understand what you commit, with or without an LLM. But writing code is easier than reviewing it, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents may even have a negative impact.
▲ | breakpointalpha 18 hours ago | parent | prev | next [-]
Your mileage may vary, but I just got Cursor (using Claude 4 Sonnet) to one-shot a sequence of bash scripts that clean up stale AWS resources. I pasted in the Jira ticket description that I wrote, along with a few examples, and the scripts work perfectly. Saved me a few hours of bash writing and debugging, because I can read bash but not write it well.

It seems that the smaller the task and the more tightly defined the input and output, the better LLMs are at one-shotting.
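For reference, a minimal sketch of that kind of one-shot cleanup script might look like this; I'm assuming the stale resources are unattached EBS volumes, and the resource type and region here are illustrative, not from the actual ticket:

    #!/usr/bin/env bash
    # Illustrative sketch only: deletes unattached ("available") EBS volumes.
    # Resource type and region are assumptions, not from the ticket.
    set -euo pipefail

    REGION="us-east-1"  # assumed region

    # Collect the IDs of volumes not attached to any instance.
    volumes=$(aws ec2 describe-volumes \
      --region "$REGION" \
      --filters Name=status,Values=available \
      --query 'Volumes[].VolumeId' \
      --output text)

    for vol in $volumes; do
      echo "Deleting stale volume: $vol"
      aws ec2 delete-volume --region "$REGION" --volume-id "$vol"
    done

A well-specified ticket basically spells out exactly this kind of filter-then-delete loop, which is why it's such an easy one-shot target.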
▲ | DanielHB 20 hours ago | parent | prev | next [-]
I have found that the limit of LLMs' usefulness for coding is basically whatever can reasonably be done as a single copy-paste, usually only individual functions. Beyond that, I basically use them as Google on steroids for obscure topics; for simple stuff I still use normal search engines.
▲ | insane_dreamer 14 hours ago | parent | prev [-]
I've found it to be a significant productivity boost, but only for a small subset of problems: things like bash scripts, which are tedious to write and I'm not that great at bash, or fixing small bugs in a React app, a framework I'm not well versed in. But even then I have to keep my thinking cap on so it doesn't go off the rails. It works best when the target is small and easily testable (without the LLM being able to fudge the tests, which it will do).

For many other tasks it's like training an intern, which is worth it if the intern is going to grow, take on more responsibility, and learn to do things correctly. But since the LLM doesn't learn from its mistakes, it's not clearly a worthwhile investment.
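To make "small and easily testable" concrete, here's a hypothetical example of the kind of target I mean; the slugify function and its checks are made up for illustration:

    # Hypothetical example: a tiny function with checks the output
    # can be verified against, so the LLM can't fudge the result.
    slugify() {
      echo "$1" | tr '[:upper:]' '[:lower:]' \
        | tr -cs 'a-z0-9' '-' \
        | sed 's/^-//; s/-$//'
    }

    # Quick assertions to run against whatever the LLM produces.
    [ "$(slugify 'Hello, World!')" = "hello-world" ] || echo "FAIL: basic case"
    [ "$(slugify '  spaces  ')" = "spaces" ] || echo "FAIL: trimming"

When the whole contract fits in a couple of assertions like that, it's trivial to tell whether the LLM actually did the job.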