anon721656321 21 hours ago
The issue is reliability. Would you be willing to guarantee that some automated process will never mess up, and if/when it does, compensate the user with cash? For a compiler with a given set of test suites, the answer is generally yes, and you could probably find someone willing to insure you, for a significant amount of money, against a compilation bug screwing up in such a large way that it affects your business.

For an LLM, I have a hard time believing that anyone will be willing to provide that same level of insurance. If an LLM company said "hey, use our product, it works 100% of the time, and if it does fuck up, we will pay up to a million dollars in losses," I bet a lot of people would be willing to use it. But I do not believe any sane company will make that guarantee at this point, outside of extremely narrow cases with lots of guardrails.

That's why a lot of AI tools are consumer/dev tools: if they fuck up (which they will), the losses are minimal.