idle_zealot 4 hours ago
There are mixed views here. Some are making the claim relevant to the "No Silver Bullet" observation: that LLMs cut down time spent on non-essential (accidental) work. But the view that's really driving the hype is that the machine can do the essential work: design the system for you and implement it, explore the possibility space, make judgments about the tradeoffs, and make decisions.

Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine. I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting for the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.

Eventually, with a few more breakthroughs in architecture, maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility generators being used as accountability sinks.
charcircuit 3 hours ago
> Now, can it actually do those things? Not in my estimation.

Just today I asked my clawbot to generate a daily report for me, and it was able to build an entire scraping skill for itself to use for making the report. It designed the skill and made decisions along the way, including switching data sources when it realized the one it was trying to use was blocking it as a bot.
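For the curious, a minimal sketch of the kind of fallback behavior I mean (the URLs and block heuristics here are made up for illustration; the skill the bot actually wrote was more involved):

    # Hypothetical sketch only: source URLs and block heuristics are
    # illustrative, not the actual generated skill.
    import requests

    SOURCES = [
        "https://primary.example/daily",
        "https://fallback.example/daily",
    ]

    def looks_blocked(resp):
        # Crude anti-bot heuristics: common block status codes or a CAPTCHA page.
        return resp.status_code in (403, 429) or "captcha" in resp.text.lower()

    def fetch_report_data():
        # Try each source in turn; move on when one appears to be blocking us.
        for url in SOURCES:
            resp = requests.get(url, timeout=10)
            if resp.ok and not looks_blocked(resp):
                return resp.text
        raise RuntimeError("all data sources blocked or unavailable")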