woeirua 3 hours ago
I love how the author took zero responsibility for anything that happened. Anyone who has used LLMs for more than a short time has seen how these things can mess up, and realizes that you can't rely on prompt-based interventions to save you. Guardrails need to be based on deterministic logic:

- regexes
- preventing certain tool or system calls entirely using hooks
- RBAC permission boundaries that prohibit agents from doing sensitive actions
- sandboxing, so agents have a small blast radius
- a human in the loop for sensitive actions

This was just a colossal failure on the OP's part. Their company will likely go under as a result. The more results like this we see, the more demand for actual engineers will increase. Skilled engineers who embrace the tooling are incredibly effective. Vibe coders who YOLO are one tool call away from total disaster.
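The deterministic-guardrail point above can be sketched as a pre-tool-call hook that vetoes dangerous commands with pattern matching before they ever execute. This is a hypothetical example: the function names and pattern list are illustrative, assuming an agent framework that lets you intercept tool calls.

```python
import re

# Illustrative deny-list of destructive shell patterns. Real guardrails
# would pair this with RBAC and sandboxing; a deny-list alone is not enough.
DENY_PATTERNS = [
    re.compile(r"\brm\s+-rf\b"),
    re.compile(r"\bDROP\s+(TABLE|DATABASE)\b", re.IGNORECASE),
    re.compile(r"\bgit\s+push\s+--force\b"),
]

def guard_tool_call(tool_name: str, command: str) -> bool:
    """Return True if the call may proceed, False if it is blocked.

    Deterministic: no model in the loop, just pattern matching,
    so the agent cannot talk its way past it.
    """
    if tool_name != "shell":
        return True  # only shell calls are screened in this sketch
    return not any(p.search(command) for p in DENY_PATTERNS)

print(guard_tool_call("shell", "ls -la"))    # True: harmless command passes
print(guard_tool_call("shell", "rm -rf /"))  # False: blocked before execution
```

The key property is that the check runs outside the model: however the prompt is phrased, the matching command never reaches the system.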