jaccola | 18 hours ago
- This has been going on for well over a year now.
- They always write relatively long, zealous explainers of how productive they are (including some replies to your comment).

These two points together make me think: why do they care so much to convince me? Why don't they just link me to the amazing thing they made? That would be pretty convincing! Are they being paid or otherwise incentivised to make these hyperbolic claims?

To be fair, they don't often look like vanilla LLM output, but they do all have the same structure/pattern to them.
drogus | 14 hours ago
I think it's a mix of people being genuinely hyped and wishing this were the future. For me, the productivity gains are mostly in areas where I don't have expertise (the downside, of course, is that I don't learn much if I let AI do the work), or when I know it's a throwaway thing and I absolutely don't care about the quality.

For example, I'm bedtime-reading a series of books to my daughter, and one of them doesn't have a Polish translation; the Polish publisher stopped working with the author. I vibe coded an app that extracts an epub, translates each of the chapters, and packages it back into an epub, with a few features like saving the translations in sqlite (so a translation run can be stopped and resumed), the ability to edit translations, custom instructions, etc. It's only ~1000 lines of Rust code, but Claude generated it while I was making dinner (I just checked progress and prompted next steps every few minutes). I can guarantee it would have taken me at least an evening of coding, probably with debugging along the way, to make it work.

So while I know it still falls short in certain scenarios (novel code in a niche technology, very big projects, etc.), it is kind of a game changer in others. It lets me build small tools that I just wouldn't have had time for otherwise. I guess what I'm saying is: even with all the limitations, I kind of understand the hype.

That said, I think some people may indeed exaggerate LLMs' capabilities, unless they actually know some secret recipe to make them do all those awesome hyped things (but then I would love to see it).
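The stop-and-resume idea described above (persist each chapter's translation as it completes, skip already-done chapters on restart) can be sketched roughly like this in Rust. Everything here is a hypothetical stand-in: the `translate` stub fakes the translation backend, and a flat tab-separated file replaces the sqlite store the commenter actually used, just to keep the sketch dependency-free.

```rust
use std::collections::BTreeMap;
use std::fs;
use std::path::Path;

// Stand-in for the real translation backend (an LLM or translation API).
fn translate(chapter: &str) -> String {
    format!("[PL] {chapter}")
}

// Load previously saved translations (chapter index -> text) from a simple
// one-record-per-line store, so an interrupted run can resume where it left off.
// Assumes translated text contains no newlines or tabs; sqlite avoids that caveat.
fn load_done(path: &Path) -> BTreeMap<usize, String> {
    let mut map = BTreeMap::new();
    if let Ok(data) = fs::read_to_string(path) {
        for line in data.lines() {
            if let Some((idx, text)) = line.split_once('\t') {
                if let Ok(i) = idx.parse() {
                    map.insert(i, text.to_string());
                }
            }
        }
    }
    map
}

// Append one finished chapter to the store immediately, so progress
// survives a crash or a Ctrl-C mid-run.
fn save_done(path: &Path, idx: usize, text: &str) {
    use std::io::Write;
    let mut f = fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open(path)
        .expect("open store");
    writeln!(f, "{idx}\t{text}").expect("write store");
}

// Translate all chapters, skipping any that were already completed
// in a previous (possibly interrupted) run.
fn translate_resumable(chapters: &[&str], store: &Path) -> Vec<String> {
    let mut done = load_done(store);
    for (i, ch) in chapters.iter().enumerate() {
        if !done.contains_key(&i) {
            let t = translate(ch);
            save_done(store, i, &t);
            done.insert(i, t);
        }
    }
    (0..chapters.len()).map(|i| done[&i].clone()).collect()
}

fn main() {
    let store = std::env::temp_dir().join("epub_translate_sketch.tsv");
    let out = translate_resumable(&["Rozdzial 1", "Rozdzial 2"], &store);
    println!("{} chapters translated", out.len());
    let _ = fs::remove_file(&store);
}
```

Rerunning `translate_resumable` with the same store path only calls `translate` for chapters not yet on disk, which is the resumability the comment describes; the real app would repackage the results into an epub afterwards.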
Pxtl | 12 hours ago
Hilariously, the only impressive thing I've ever heard of being made with AI was Yegge's "GasTown", which is a Kubernetes-like orchestrator... for AI agents. And half of it seemed to be a workaround for "the agents keep stopping, so I need an agent to monitor an agent to monitor an agent to keep them on task".
evilduck | 12 hours ago
> why do they care so much to convince me

Someone might share something for a specific audience that doesn't include you. Not everything shared is required to be persuasive. Take it or leave it.

> why don't they just link me to the amazing thing they made, that would be pretty convincing?!

99.99% of the things I've created professionally don't belong to me, and I have no desire or incentive to create or maintain open source projects on my own time. Honestly, most things I've done with AI aren't amazing either; it's usually boring, routine tasking, just done more cost-efficiently.

If you flip the script, it's just as damning. "Hey, here are some general approaches that are working well for me, check it out" has been countered by AI skeptics for years now with "you're lying, I won't even try it, and you're also a bot or a paid shill". Look at basically any AI-related post and there's almost always someone ready to call BS within the first few minutes of it being posted.
jennyholzer4 | 15 hours ago

[dead]