| ▲ | beepbooptheory 10 days ago |
| Can you actually, like, follow through with this line? I know there are literally tens of thousands of comments just like this at this point, but if you have a chance, could you explain what you think this means? What should we take from it? Just unpack it a little bit for us. |
|
| ▲ | croshan 10 days ago | parent | next [-] |
| An interpretation that makes sense to me: humans are non-deterministic black boxes already at the core of complex systems. So in that sense, replacing a human with AI is not unreasonable. I’d disagree, though: humans are still easier to predict and understand (and trust) than AI, typically. |
| |
| ▲ | sdesol 10 days ago | parent | next [-] | | With humans we have a decent understanding of what they are capable of. I trust a medical professional to provide me with medical advice and an engineer to provide me with engineering advice. LLMs, on the other hand, can be unpredictable at times, and they can make errors in ways you would not imagine. Take the following examples from my tool, which show how GPT-4o and Claude 3.5 Sonnet can screw up. In this example, GPT-4o cannot tell that GitHub is spelled correctly: https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples... In this example, Claude cannot tell that GitHub is spelled correctly: https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3... I still believe LLMs are a game changer, and I'm currently working on what I call a "Yes/No" tool, which I believe will make trusting LLMs a lot easier (for certain things, of course). The basic idea is that the "Yes/No" tool will let you combine models, samples and prompts to arrive at a Yes or No answer. Based on what I've seen so far, a single model can easily screw up, but it is unlikely that all of them will screw up at the same time. | | |
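A minimal sketch of how such a multi-model "Yes/No" vote might look, assuming a placeholder `ask_model` client; this is an illustration of the consensus idea, not the actual gitsense implementation:

```python
# Hypothetical sketch: ask several models the same yes/no question multiple
# times and only trust a unanimous answer. `ask_model` is a placeholder for
# whatever model client library you actually use.

def ask_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its raw reply."""
    raise NotImplementedError

def yes_no(prompt: str, models: list[str], samples: int = 3) -> str:
    votes = []
    for model in models:
        for _ in range(samples):
            reply = ask_model(model, prompt).strip().lower()
            votes.append("yes" if reply.startswith("yes") else "no")
    # A single model can easily screw up, but it is unlikely that every
    # sample from every model screws up the same way, so require unanimity.
    if all(v == "yes" for v in votes):
        return "yes"
    if all(v == "no" for v in votes):
        return "no"
    return "undecided"  # disagreement: fall back to a human or more samples

# Example use (hypothetical model names):
# yes_no('Is "GitHub" spelled correctly in this sentence?',
#        models=["gpt-4o", "claude-3.5-sonnet"])
```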
| ▲ | visarga 9 days ago | parent [-] | | It's actually a great topic - both humans and LLMs are black boxes, and both rely on patterns and abstractions that are leaky. In the end it's a matter of trust, like going to the doctor. But we have had extensive experience with humans, so it is natural that our trust in them is better defined; LLMs will become better understood as well. There is no central understander or truth, and that is the interesting part: it's a "Blind men and the elephant" situation. | | |
| ▲ | sdesol 9 days ago | parent [-] | | We are entering the nondeterministic programming era, in my opinion. LLM applications will be designed with the idea that we can't be 100% sure, and whatever solution provides the most safeguards will probably be the winner. |
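One way to read "safeguards" concretely is to wrap every LLM call in deterministic validation with bounded retries; a rough sketch, with hypothetical `call_llm` and `looks_valid` helpers rather than any particular library:

```python
# Rough sketch of programming around nondeterminism: validate every reply
# against a deterministic check, retry a bounded number of times, and
# surface failure explicitly instead of trusting a single answer.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for your model client

def looks_valid(reply: str) -> bool:
    return bool(reply.strip())  # replace with schema, regex, or test checks

def guarded_call(prompt: str, retries: int = 3) -> str:
    for _ in range(retries):
        reply = call_llm(prompt)
        if looks_valid(reply):
            return reply
    raise RuntimeError("no valid answer after retries")
```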
|
|
|
| ▲ | og_kalu 9 days ago | parent | prev | next [-] |
| Because people are not saying "let's replace Casio calculators with interfaces to GPT!" By and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle with or fail at, that humans excel at or do decently in, and that LLMs are making some headway in. There's no point comparing LLM performance to some hypothetical perfect-understanding machine that doesn't exist; it's nonsensical, actually. You compare it to the performance of the beings it's meant to replace or augment - humans. Replacing non-deterministic black boxes with potentially better-performing non-deterministic black boxes is not some crazy idea. |
|
| ▲ | brookst 9 days ago | parent | prev | next [-] |
| Sure. I mean, humans are very good at building businesses and technologies that are resilient to human fallibility. So when we think of applications where LLMs might replace or augment humans, it’s unsurprising that their fallible nature isn’t a showstopper. Sure, EDA tools are deterministic, but the humans who apply them are not. Introducing LLMs to these processes is not some radical and scary departure, it’s an iterative evolution. |
| |
| ▲ | beepbooptheory 9 days ago | parent [-] | | Ok yeah. I think the thing that trips me up with this argument then is just, yes, when you regard humans in a certain neuroscientific frame and consider things like consciousness or language or will, they are fundamentally nondeterministic. But that isn't the frame of mind of the human engineer who does the work or even validates it. When the engineer is working, they aren't seeing themselves as some black box which they must feed input and get output; they are thinking about the things in themselves, justifying their work to themselves and others. Just because you can place yourself in some hypothetical third person here, one that oversees the model and the human and says "huh yeah they are pretty much the same, huh?", doesn't actually tell us anything about what's happening on the ground in either case, if you will. At the very least, this same logic would imply fallibility is one-dimensional and always statistical; "the patient may be dead, but at least they got a new heart." Like, isn't it important to be in love, not just be married? To borrow some Kant, shouldn't we still value what we can do when we think as if we aren't just some organic black box machines? Is there even a question there? How could it be otherwise? It's really just that the "in principle" part of the overall implication of your comment and so many others just doesn't make sense. It's very much cutting off your nose to spite your face. How could science itself be possible, much less engineering, if this is how we decided things? If we regarded ourselves always from the outside? How could we even be motivated to debate whether we get the computers to design their own chips? When would something actually happen? At some point, people do have ideas, in a full, if false, transparency to themselves, that they can write down and share and explain. This is not only the thing that has gotten us this far, it is the very essence of why these models are so impressive in the certain ways that they are. It doesn't make sense to argue for the fundamental cheapness of the very thing you are ultimately trying to defend. And it imposes this strange perspective where we are not even living inside our own (phenomenal) minds anymore, where it fundamentally never matters what we think, no matter our justification. It's weird! I'm sure you have a lot of good points and stuff; I'm just pointing out that this particular argument is maybe not the strongest. | | |
| ▲ | brookst 9 days ago | parent [-] | | We start from similar places but get to very different conclusions. I accept that I'm fallible, both in my areas of expertise and in all the meta stuff around it. I code bugs. I omit requirements. Not often, and there are mental and technical means to minimize them, but my work, my org's structure, my company's processes are all designed to mitigate human fallibility. I'm not interested in "defending" AI models. I'm just saying that their weaknesses are qualitatively similar to human weaknesses, and as such, we are already prepared to deal with those weaknesses as long as we are aware of them, and as long as we don't make the mistake of thinking that because they use transistors they should be treated like a mostly deterministic piece of software where one passing unit test means it is good. I think you're reading some kind of value judgement on consciousness into what is really just a pragmatic approach to slotting powerful but imperfect agents into complex systems. It seems obvious to me, and without any implications as to human agency. |
|
|
|
| ▲ | therealcamino 10 days ago | parent | prev | next [-] |
| I took it to be a joke that the description "slow, expensive, non-deterministic black boxes" can apply to the engineers themselves. The engineers would be the ones who would have to place LLMs at the core of the system. To anyone outside, the work of the engineers is as opaque as the operation of LLMs. |
|