dartos | 4 days ago
People can communicate each step, and review each step as that communication is happening. LLMs must be prompted for everything and don't act on their own. The value of the assertion is in preventing laymen from seeing a statistical guessing machine be correct and assuming it always will be. It's dangerous to put so much faith in what is, in reality, a very good guessing machine. You can ask it to retrace its steps, but it's just guessing at what its steps were, since it didn't actually go through real reasoning; it just generated text that reads like reasoning steps.
brookst | 4 days ago
> since it didn't actually go through real reasoning, just generated text that reads like reasoning steps.

Can you elaborate on the difference? Are you bringing sentience into it? It kind of sounds like it from "don't act on their own", but reasoning and sentience are wildly different things.

> It's dangerous to put so much faith in what in reality is a very good guessing machine

Yes, exactly. That's why I think it is good that we are supplementing fallible humans with fallible LLMs: we already have processes in place that assume no actor is infallible.
ben_w | 4 days ago
> People can communicate each step, and review each step as that communication is happening.

Can, but don't by default, just as LLMs can be asked for a chain of thought but the default for most users is just chat. This behaviour of humans is why we software developers have daily standup meetings, version control, and code review.

> LLMs must be prompted for everything and don't act on their own

And this is why we humans have task boards like JIRA, and quarterly goals set by management.
vidarh | 3 days ago
LLMs "don't act on their own" because we only reanimate them when we want something from them. Nothing stops you from wiring up an LLM to keep generating, and feeding it sensory inputs to keep it processing. In other words, that's a limitation of the harness we put them in, not of LLMs. As for people communicating each step, we have plenty of experiments showing that it's pretty hard to get people to reliably report what they actually do as opposed to a rationalization of what they've actually done (e.g. split brain experiments have shown both your brain halves will happily lie about having decided to do things they haven't done if you give them reason to think they've done something) You can categorically not trust peoples reasoning about "why" they've made a decision to reflect what actually happened in their brain to make them do something. | ||||||||||||||
int_19h | 3 days ago
A human brain in a vat doesn't act on its own, either.