rvz | 7 hours ago
It isn't. Code generation in LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:

1. Developers still cannot trust that the code works (even if it has tests), so it needs thorough human supervision and ongoing maintenance.

2. Hence (1), when it goes horribly wrong in production it can cost you far more than the tokens you spent building it in the first place.

Image generation, by contrast, has close to no operational impact: it needs far less human supervision and can safely be done with none.