croshan 10 days ago
An interpretation that makes sense to me: humans are already non-deterministic black boxes at the core of complex systems. So in that sense, replacing a human with AI is not unreasonable. I'd disagree, though: humans are still typically easier to predict, understand, and trust than AI.
sdesol 10 days ago | parent | next
With humans, we have a decent understanding of what they are capable of. I trust a medical professional to provide me with medical advice and an engineer to provide me with engineering advice. LLMs can be unpredictable at times, and they can make errors in ways you would not imagine. Take the following examples from my tool, which show how GPT-4o and Claude 3.5 Sonnet can screw up.

In this example, GPT-4o cannot tell that GitHub is spelled correctly: https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples...

In this example, Claude cannot tell that GitHub is spelled correctly: https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3...

I still believe LLMs are a game changer, and I'm currently working on what I call a "Yes/No" tool, which I believe will make trusting LLMs a lot easier (for certain things, of course). The basic idea is that the "Yes/No" tool lets you combine models, samples, and prompts to arrive at a yes or no answer. Based on what I've seen so far, a single model can easily screw up, but it is unlikely that all of them will screw up at the same time.
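To make the idea concrete, here's a rough sketch of the consensus check (minimal Python; the `ask` callback and the model names are placeholders for whatever API client you'd plug in, not the actual tool):

    from collections import Counter
    from typing import Callable

    def yes_no(prompt: str,
               ask: Callable[[str, str], str],
               models: list[str],
               samples: int = 3) -> str:
        """Return 'yes' or 'no' on consensus, 'unsure' on disagreement.

        `ask(model, prompt)` is whatever client function calls the
        model and returns its raw text answer.
        """
        votes = Counter()
        for model in models:
            for _ in range(samples):
                answer = ask(model, prompt).strip().lower()
                votes["yes" if answer.startswith("y") else "no"] += 1
        # One model can easily screw up; it's unlikely they all screw
        # up at the same time, so require a unanimous vote.
        if len(votes) == 1:
            return next(iter(votes))
        return "unsure"  # disagreement -> flag for human review

    # e.g. yes_no('Is "GitHub" spelled correctly in this text?',
    #             ask=my_client, models=["gpt-4o", "claude-3-5-sonnet"])

Requiring a unanimous vote is the strictest policy; a majority threshold is the obvious relaxation if you want fewer "unsure" results.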
9 days ago | parent | prev
[deleted]