VadimPR 2 hours ago
Anthropic's recent security failures reveal the caveats of relying solely on AI to write code: an LLM does not yet match the safety judgment of an experienced engineer, even if it can seemingly write code that is just as good. In short, if you give LLMs to the masses, they will produce code faster, but overall quality will degrade. Microsoft and Amazon found this out quickly. Anthropic's QA process is better equipped to handle it, but cracks are still showing.
squeegmeister 2 hours ago | parent
Anthropic has a QA process? I run into bugs regularly, even on the "stable" release channel.