moron4hire 7 hours ago
AI inevitabilism. This is a concerning thing. Folks talk about AI as if it is a foregone conclusion, but that conclusion has yet to be demonstrated.

I'm stuck between a rock and a hard place right now. I work at a company that claims its people are the source of its great work output, yet the key stakeholders for my particular project are constantly beating the "AI, use AI" drum. I've been trying to design a product that enhances our analysts' abilities: a middle ground where the subject matter experts use AI to do the boring, manual-labor kind of work that doesn't enrich anyone and just leads to our organization burning out junior analysts with overtime they'll never be compensated for. But my stakeholders keep beating that drum: "AI can do this work from front to back."

To be clear, it can't. We've done the research, and what an LLM produces for the kind of work we do is only dilettantism. It looks good if you're skimming the output, but drill down and there are massive problems.

But that story — "AI is good now. What did you try last year? What model did you use? It can do so much now" — is pernicious. First, I don't see today's models producing anything functionally better; they just dress it up in better language. Second, that's not an actionable software engineering plan! "Oh, just wait a year, the AI will get better." Sure, it gets better at not completely shitting the bed before you coax it into doing a particular job. But it hasn't been getting better at being actually insightful, at actually delivering what our people with very deep experience can do just by rote, just by asking them, "What do you think of <insert competitor>'s capacity to deliver X compared to our ability to do the same?"

I feel like I'm living in crazytown. I evaluate AI capability far more than my stakeholders do, and they keep telling me "more AI!"
If it weren't for my mortgage and my kids and my junior devs I'm desperately trying to protect, I would have quit months ago.