| ▲ | spogbiper 3 days ago |
| I am working on a project that uses an LLM to pull certain pieces of information from semi-structured documents and then categorize/file them under the correct account. It's about 95% accurate and we haven't even begun to fine-tune it. I expect it will require human-in-the-loop checks for the foreseeable future, but even with a human approving each item, it's going to save the clerical staff hundreds of hours per year. There are a lot of opportunities in automating/semi-automating processes like this, basically just information extraction and categorization tasks. |
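A minimal sketch of the extract-then-categorize flow described above, with the human approval gate at the end. Everything here is hypothetical: `llm_extract` is a regex stand-in for the real model call, and the account rule is a toy, since the actual system's details aren't given.

```python
import re
from dataclasses import dataclass

@dataclass
class Proposal:
    fields: dict          # key/value pairs pulled from the document
    account: str          # suggested filing target
    rationale: str        # shown to the human reviewer
    approved: bool = False

def llm_extract(document: str) -> dict:
    """Stand-in for the real LLM call: here, just a regex pull of
    'Key: value' lines from a semi-structured document."""
    return dict(re.findall(r"^(\w+):\s*(.+)$", document, re.MULTILINE))

def categorize(fields: dict) -> Proposal:
    # Toy rule standing in for the model's account suggestion.
    vendor = fields.get("Vendor", "unknown")
    return Proposal(
        fields=fields,
        account="vendor-" + vendor.lower(),
        rationale=f"Matched vendor name '{vendor}'",
    )

doc = "Vendor: Acme\nAmount: 120.50\nDate: 2024-05-01"
proposal = categorize(llm_extract(doc))
# Nothing is filed until a human signs off on the proposal.
proposal.approved = True   # reviewer clicked "agree"
```

The point of the `Proposal` shape is that the rationale travels with the extraction, so the reviewer sees why the system chose an account, not just what it chose.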
|
| ▲ | systemerror 3 days ago | parent | next [-] |
| The big issue with LLMs is that they’re usually right — like 90% of the time — but that last 10% is tough to fix. A 10% failure rate might sound small, but at scale, it's significant — especially when it includes false positives. You end up either having to live with some bad results, build something to automatically catch mistakes, or have a person double-check everything if you want to bring that error rate down. |
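Back-of-envelope arithmetic for why "only 10%" bites at scale (the throughput and reviewer numbers are made up for illustration):

```python
docs_per_year = 50_000            # assumed volume, purely illustrative
model_error_rate = 0.10           # "usually right" still means 10% wrong
unchecked_errors = docs_per_year * model_error_rate

# Even a human spot-check that catches 9 of 10 model mistakes leaks some.
reviewer_miss_rate = 0.10
residual_errors = unchecked_errors * reviewer_miss_rate

print(unchecked_errors, residual_errors)   # 5000.0 500.0
```

That is the trade-off named above: eat the 5,000 bad results, build automated checks, or pay for review on every item to push the residual down.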
| |
| ▲ | f3b5 3 days ago | parent | next [-] | | Depending on the use case, a 10% failure rate can be quite acceptable. This is of course for non-critical applications, e.g. top-of-funnel sales automation. In practice, for simple uses like labeling data at scale, I'm actually reaching 95-99% accuracy in my startup. | |
| ▲ | spogbiper 3 days ago | parent | prev [-] | | yes, the entire design relies on a human to check everything. Basically it presents what it thinks should be done, and why; the human then agrees or does not. Much work is put into streamlining this, but ultimately it's still human-controlled | | |
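The agree/disagree gate described here could be sketched as a simple routing function (all names and record shapes are invented; `decide` stands in for the reviewer's click):

```python
def review_queue(proposals, decide):
    """Route each LLM proposal through a human verdict.

    decide(p) -> True means the reviewer agreed; anything rejected
    falls back to the fully manual process, i.e. the status quo.
    """
    approved, manual = [], []
    for p in proposals:
        (approved if decide(p) else manual).append(p)
    return approved, manual

proposals = [
    {"doc": "invoice-17", "account": "acme", "why": "vendor match"},
    {"doc": "invoice-18", "account": "???", "why": "low confidence"},
]
approved, manual = review_queue(
    proposals, decide=lambda p: p["account"] != "???"
)
```

The design property worth noting: a rejection costs nothing beyond the manual work that would have happened anyway, which is why the failure mode is benign.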
| ▲ | wredcoll 3 days ago | parent [-] | | At the risk of being obvious, this seems set up for failure in the same way expecting a human to catch an automated car's mistakes is. Although I assume mistakes here probably don't matter very much. | | |
| ▲ | LPisGood 3 days ago | parent | next [-] | | This reminds me of the issue with the old Windows access control (UAC) system. When those prompts pop up constantly asking for elevated privileges, it's actually worse, because it trains people to just reflexively allow elevation. | |
| ▲ | spogbiper 3 days ago | parent | prev [-] | | yes, mistakes are not a huge problem. They will become evident further down the process, and they happen now with the human-only system. Worst case, the LLM fails and the staff just have to do the manual work they are doing now |
|
| ▲ | whatever1 3 days ago | parent | prev | next [-] |
| All of these AI projects promise that they just need some fine-tuning to go from PoC to an actual workable product. Nobody has been able to fine-tune them. Sorry, this is bull. Either it works or it doesn't. |
|
| ▲ | LPisGood 3 days ago | parent | prev | next [-] |
| > its going to save the clerical staff hundreds of hours per year How many hundreds of hours is your team spending to get there? What is the ROI on this vs. investing that money elsewhere? |
| |
| ▲ | spogbiper 3 days ago | parent [-] | | Can't speak to the financial benefit over other investments. Total dev/testing time looks to be fairly small compared to the time saved in even one year, although with different salaries etc. I can't be too certain about the money ratio. Ultimately it's not my direct concern, but those making the decisions are very happy with the results so far and are looking for additional processes to apply this type of system to. |
|
|
| ▲ | kjkjadksj 3 days ago | parent | prev | next [-] |
| Isn’t that something you can do with non-AI tooling to 100% accuracy? |
| |
| ▲ | spogbiper 3 days ago | parent [-] | | In some similar cases yes, and this client has tried to accomplish that for literally decades without success. I don't want to be too detailed for reasons, but basically they cannot standardize the input to the point where anything non-AI has been able to parse it very well. |
|
|
| ▲ | beepbooptheory 3 days ago | parent | prev [-] |
| How will you know in practice which 5% is wrong? |
| |
| ▲ | spogbiper 3 days ago | parent [-] | | the system presents a summary that a human has to approve, with everything laid out to make that as easy as possible: links to all the sources, etc. |
|