systemerror 3 days ago

The big issue with LLMs is that they’re usually right — like 90% of the time — but that last 10% is tough to fix. A 10% failure rate might sound small, but at scale, it's significant — especially when it includes false positives. You end up either having to live with some bad results, build something to automatically catch mistakes, or have a person double-check everything if you want to bring that error rate down.
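To make that concrete, the usual triage looks something like the sketch below: auto-accept high-confidence outputs, run a cheap automated check on the middle band, and send the rest to a person. The names and thresholds are illustrative, not from any particular system:

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        label: str
        confidence: float  # model-reported or calibrated score in [0, 1]

    def route(pred: Prediction, auto_accept: float = 0.95, review_floor: float = 0.70) -> str:
        # High confidence: accept automatically and live with the residual error rate.
        if pred.confidence >= auto_accept:
            return "accept"
        # Middle band: run a cheap automated check (rules, a second model, retrieval, etc.).
        if pred.confidence >= review_floor:
            return "auto_check"
        # Low confidence: escalate to a human reviewer.
        return "human_review"

    print(route(Prediction("invoice", 0.98)))  # accept
    print(route(Prediction("invoice", 0.60)))  # human_review

Which bucket dominates, and how expensive the "human_review" path is, determines whether that last 10% is tolerable or not.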

f3b5 3 days ago | parent | next [-]

Depending on the use case, a 10% failure rate can be quite acceptable. This is of course for non-critical applications, e.g. top-of-funnel sales automation. In practice, for simple uses like labeling data at scale, I'm actually reaching 95-99% accuracy in my startup.
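For labeling work like that, accuracy numbers are typically backed by spot-checking predictions against a small hand-labeled gold sample. A minimal sketch of that check (function and data names are made up for illustration):

    import random

    def spot_check(gold: dict[str, str], predicted: dict[str, str], sample_size: int = 200) -> float:
        """Estimate labeling accuracy by comparing predictions to a hand-labeled gold sample."""
        ids = random.sample(list(gold), min(sample_size, len(gold)))
        hits = sum(predicted.get(i) == gold[i] for i in ids)
        return hits / len(ids)

    # Example: a result of 0.95+ on a decent sample would support a "95-99% accuracy" claim.
    gold = {"doc1": "invoice", "doc2": "receipt", "doc3": "contract"}
    predicted = {"doc1": "invoice", "doc2": "receipt", "doc3": "invoice"}
    print(spot_check(gold, predicted))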

spogbiper 3 days ago | parent | prev [-]

yes, the entire design relies on a human to check everything. basically it presents what it thinks should be done, and why, and the human then agrees or does not. much work is put into streamlining this, but ultimately it's still human controlled
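Roughly, the pattern described above (propose an action plus a rationale, then let a human approve or reject) could look like the sketch below. This is only an illustration of the workflow, not the actual system being discussed:

    from dataclasses import dataclass

    @dataclass
    class Proposal:
        action: str                   # what the model thinks should be done
        rationale: str                # why, shown to the reviewer alongside the action
        approved: bool | None = None  # None until a human decides

    def review(p: Proposal) -> Proposal:
        # Present the proposed action and its rationale, then record the human's decision.
        print(f"Proposed action: {p.action}")
        print(f"Rationale:       {p.rationale}")
        p.approved = input("Approve? [y/n] ").strip().lower() == "y"
        return p

    review(Proposal("merge duplicate customer records 1042 and 2311",
                    "same email, same billing address"))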

wredcoll 3 days ago | parent [-]

At the risk of stating the obvious, this seems set up for failure in the same way that expecting a human to catch an automated car's mistakes is. Although I assume mistakes here probably don't matter very much.

LPisGood 3 days ago | parent | next [-]

This reminds me of the issue with the old Windows access control system.

If those prompts constantly pop up asking for elevated privileges, it actually makes things worse, because it trains people to just reflexively allow elevation.

spogbiper 3 days ago | parent | prev [-]

yes, mistakes are not a huge problem. they will become evident farther down the process, and they happen now with the human-only system too. worst case, the LLM fails and they just have to do the manual work they are doing now