crystal_revenge · 7 hours ago
> ... for AI to be used effectively.

I'm continually fascinated by the huge differences in individual ability to produce successful results with AI. I always assumed that one of the benefits of AI was "anyone can do this". Then I realized that a lot of the people I interact with don't really understand the problem they're trying to solve all that well, and hold an irrational belief that they can get AI to brute-force their way to a solution.

I don't even use the more powerful models (just Sonnet 4.6), and I have yet to have a project not come out fairly successful in a short period of time. This includes graded live-coding examples for interviews, so there is at least some objective measure that these are functional.

Strangely, I find that traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.
gopher_space · 7 hours ago
> I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well

I've been through a handful of "anyone can do this" epiphanies since the '90s and have come to realize the full statement should be "anyone can do this, if they care about the problem space".
lamasery · 6 hours ago
"AI" tools I've got at work (and am mandated to use, complete with usage tracking) aren't a wide-open field of options like what someone experimenting on their own time might have, so I'm stuck with whatever they give me. The projects are brown-field, integrate with obscure industry-specific systems, are heavy with access-control blockers, are already in-flight with near-term feature completion expectations that leave little time for going back and filling in the stuff LLMs need to operate well (extensive test suites, say), and must not wreck the various databases they need to interact with, most of which exist only as a production instance. I'm sure I could hack together some simple SaaS products with goals and features I'm defining myself in a weekend with these tools all on my own (no communication/coordination overhead, too!), though. I mean for an awful lot of potential products I could do that with just Rails and some gems and no LLM any time I liked over the last 15+ years or whatever, but now I could do it in Typescript or Rust or Go et c. with LLMs, for whatever that's worth. At work, with totally different constraints, the results are far less dramatic and I can't even feasibly attempt to apply some of the (reputedly) most-productive patterns of working with these things. Meanwhile, LLMs are making all the code-adjacent stuff like slide decks, diagrams, and ticket trackers, incredibly spammy. [EDIT] Actually, I think the question "why didn't Rails' extreme productivity boost in greenfield tiny-team or solo projects translate into vastly-more-productive development across all sectors where it might have been relevant, and how will LLMs do significantly better than that?" is one I'd like to see, say, a panel of learned LLM boosters address. Not in a shitty troll sort of way, I mean their exploration of why it might play out differently would actually be interesting to me. | ||||||||
| ||||||||
QuadmasterXLII · 7 hours ago
If every project you have tackled has come out successful, then you are managing never to tackle a problem that is secretly, literally impossible, which is a property of whatever prefilter you are applying to potential problems. Given that your prefilter has no false positives, the main bit of missing information is how many false negatives it has.
pegasus · 6 hours ago
> graded live coding examples for interviews

Yeah, for those you can just relax and trust the vibes. It's for complex software projects that you need those software engineering chops; otherwise you end up with an intractable mess.
| ||||||||
alfalfasprout · 7 hours ago
> Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.

This feels a bit like a strawman. How do you assess the software to be bad without being an engineer yourself? What constitutes "successful" for you?

If anything, AI tools have revealed that a lot of people have hubris about building software, with non-engineers believing they're creating successful work without realizing it's a facade of a solution that's a ticking time bomb.
| ||||||||