foxfired a day ago

I disagree. When the tool promises to do something, you end up trusting it to do the thing.

When Tesla says their car is self-driving, people trust it to self-drive. Yes, you can blame the user for believing it, but that's exactly what they were promised.

> Why didn't the lawyer who used ChatGPT to draft legal briefs verify the case citations before presenting them to a judge? Why are developers raising issues on projects like cURL using LLMs, but not verifying the generated code before pushing a Pull Request? Why are students using AI to write their essays, yet submitting the result without a single read-through? They are all using LLMs as their time-saving strategy. [0]

It's not laziness; it's the feature we were promised. We can't keep saying everyone is holding it wrong.

[0]: https://idiallo.com/blog/none-of-us-read-the-specs

rolandog a day ago | parent [-]

Very well put. You're promised Artificial Super Intelligence, shown a super cherry-picked promo, and instead get an agent that can't hold its drool and needs constant hand-holding... it can't be both things at the same time, so... which is it?