K0nserv 3 days ago

Yes, I've seen issues with both, but part of what's tricky about false negatives is that you don't necessarily realise they are there. In the systems I've worked on we've made it simple for operators to verify the work the LLM has done, but that only guards against false positives, which are less problematic.
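Roughly, the asymmetry looks like this (a toy sketch; the extraction step and the sample data are made up):

    # Why operator review catches false positives but not false negatives.
    documents = ["invoice-1", "invoice-2", "invoice-3", "memo-1"]
    relevant = {"invoice-1", "invoice-2", "invoice-3"}   # what should be found

    def llm_extract(docs):
        # Pretend the model flags one irrelevant doc and silently misses invoice-3.
        return {"invoice-1", "invoice-2", "memo-1"}

    def operator_review(candidates):
        # The operator can only judge the items put in front of them.
        return {c for c in candidates if c in relevant}

    found = llm_extract(documents)
    verified = operator_review(found)

    print("false positives caught by review:", found - verified)   # {'memo-1'}
    print("false negatives never surfaced:", relevant - found)     # {'invoice-3'}

Nothing in that loop ever puts the missed item in front of a human, which is why you don't necessarily realise it's gone.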

I've had pretty good success using LLMs for coding, and in some ways they are perfect for it. False positives are usually obvious, and false negatives don't matter much: as long as the LLM finds a solution, it's not a huge deal if there was a better way to do it. Even when the LLM cannot solve the problem at all, it usually produces some useful artifacts for the human to build on.

infecto 2 days ago

That’s fair. I've typically used LLM workflows where I believe the current gen of models shines: classification, data structuring, summarization, etc.
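For example, a minimal classification step might look like this (a sketch assuming the OpenAI Python SDK; the model name, labels, and prompt are illustrative, not from the thread). Constraining the output to a fixed label set keeps false positives easy for a reviewer to spot:

    # Sketch of an LLM classification step with a constrained label set.
    from openai import OpenAI

    LABELS = {"billing", "bug_report", "feature_request", "other"}
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def classify(ticket_text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name, swap for whatever you use
            messages=[{
                "role": "user",
                "content": f"Classify this support ticket as one of {sorted(LABELS)}. "
                           f"Reply with the label only.\n\n{ticket_text}",
            }],
        )
        label = resp.choices[0].message.content.strip().lower()
        # A label outside the allowed set is an obvious false positive and can be
        # rejected or routed to a human; a plausible-but-wrong label is the harder case.
        return label if label in LABELS else "other"

    print(classify("I was charged twice this month."))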

birn559 2 days ago

> as long as the LLM finds a solution, it's not a huge deal if there was a better way to do it

It might not matter in the short term, but in the medium term that kind of debt becomes a huge burden.