| ▲ | ACCount37 3 days ago |
| Not really. The whole "inference errors will always compound" idea was popular in GPT-3.5 days, and it seems like a lot of people just never updated their knowledge since. |
| It was quickly discovered that LLMs are capable of re-checking their own solutions if prompted - and, with the right prompts, of spotting and correcting their own errors at a significantly greater-than-chance rate. They just don't do it unprompted. Eventually, it was found that reasoning RLVR consistently gets LLMs to check themselves and backtrack. It was also confirmed that this latent "error detection and correction" capability is present even at the base-model level, but is almost never exposed - not in base models and not in non-reasoning instruct-tuned LLMs. |
| The hypothesis I subscribe to is that any LLM has a strong "character self-consistency drive". This makes it reluctant to say "wait, no, maybe I was wrong just now", even if a latent awareness that the past reasoning looks sketchy as fuck is already present within the LLM. Reasoning RLVR encourages going against that drive and utilizing those latent error-correction capabilities. |
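| Here's a rough sketch of what that prompted self-check loop looks like (the client and model name are illustrative assumptions, not anyone's specific setup): |

  # Generate an answer, then explicitly ask the model to re-check its own
  # reasoning and either confirm it or backtrack and revise.
  from openai import OpenAI

  client = OpenAI()
  MODEL = "gpt-4o-mini"  # assumption: any reasonably capable chat model

  def ask(messages):
      resp = client.chat.completions.create(model=MODEL, messages=messages)
      return resp.choices[0].message.content

  def answer_with_self_check(question, rounds=2):
      draft = ask([{"role": "user", "content": question}])
      for _ in range(rounds):
          critique = (
              f"Question: {question}\n\nProposed answer:\n{draft}\n\n"
              "Re-check the reasoning step by step. If you find an error, "
              "reply with a corrected answer. If it holds up, reply VERIFIED."
          )
          review = ask([{"role": "user", "content": critique}])
          if "VERIFIED" in review:
              break
          draft = review  # the model chose to backtrack and revise
      return draft

  print(answer_with_self_check("What is 17 * 23 - 19?"))

| The scaffold itself isn't the point - the point is that the error-spotting capability is there to be elicited, and reasoning RLVR just bakes the eliciting in. |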
|
| ▲ | jpcompartir 3 days ago | parent | next [-] |
| You seem to be responding to a strawman, and assuming I think something I don't think. |
| As of today, 'bad' generations early in the sequence still do tend towards responses that are distant from the ideal response. This is testable/verifiable by pre-filling responses, which I'd advise you to experiment with for yourself (see the sketch below). 'Bad' generations early in the output sequence are somewhat mitigatable by injecting self-reflection tokens like 'wait', or with more sophisticated test-time compute techniques. However, those remedies can simultaneously turn 'good' generations into bad ones; they are post-hoc heuristics which treat symptoms, not causes. |
| In general, as the models become larger they are able to compress more of their training data. So yes, using the terminology of the commenter I was responding to, larger models should tend to have fewer 'compression artefacts' than smaller models. |
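| If you want to run the pre-fill experiment yourself, here's a minimal sketch using Hugging Face transformers (the model name is an illustrative assumption - any small instruct model works): |

  # Force the model to continue from a deliberately bad early generation,
  # with and without an injected self-reflection token ("wait").
  from transformers import AutoModelForCausalLM, AutoTokenizer

  MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # assumption: any small instruct model
  tok = AutoTokenizer.from_pretrained(MODEL)
  model = AutoModelForCausalLM.from_pretrained(MODEL)

  question = "What is 48 * 25?"
  bad_prefix = "48 * 25 = 1000, so"          # deliberately wrong early tokens
  wait_prefix = bad_prefix + " ... wait,"    # same error plus a 'wait' injection

  def continue_from(prefill):
      # Build the chat prompt, then append the pre-filled assistant text so the
      # model must continue from it instead of starting fresh.
      prompt = tok.apply_chat_template(
          [{"role": "user", "content": question}],
          tokenize=False, add_generation_prompt=True,
      ) + prefill
      ids = tok(prompt, return_tensors="pt", add_special_tokens=False)
      out = model.generate(**ids, max_new_tokens=80, do_sample=False)
      return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)

  print("bad prefix  ->", continue_from(bad_prefix))
  print("with 'wait' ->", continue_from(wait_prefix))

| Running both continuations over a batch of questions makes the effect measurable: how often the bad prefix drags the final answer off course, and how often the injected 'wait' rescues it - or derails a completion that would otherwise have been fine. |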
| |
| ▲ | ACCount37 3 days ago | parent [-] |
| With better reasoning training, the models mitigate more and more of that entirely by themselves. They "diverge into a ditch" less, and "converge towards the right answer" more. They are able to use more and more test-time compute effectively. They bring their own supply of "wait". OpenAI's in-house reasoning training is probably best in class, but even lesser, naive implementations go a long way. |
|
|
| ▲ | Mallowram 3 days ago | parent | prev [-] |
| The problem is that language doesn't produce itself. Re-checking and correcting errors is not relevant. Error minimization is not the fount of survival; remaining variable for tasks is. The lossy encyclopedia is neither here nor there; it's a mistaken path: "Language, Halliday argues, "cannot be equated with 'the set of all grammatical sentences', whether that set is conceived of as finite or infinite". He rejects the use of formal logic in linguistic theories as "irrelevant to the understanding of language" and the use of such approaches as "disastrous for linguistics"." |
| |
| ▲ | ACCount37 3 days ago | parent [-] |
| Sorry, what? This is borderline incoherent. |
| ▲ | mallowdram 3 days ago | parent [-] |
| The units themselves are meaningless without context. The point of existence, action, tasks is to solve the arbitrariness in language. Tasks refute language, not the other way around. This may be incoherent, as the explanation is scientific, based in the latest conceptualization of linguistics. CS never solved the incoherence of language, the conduit metaphor paradox. It's stuck behind language's bottleneck, and it does so willingly, blind-eyed. |
| ▲ | ACCount37 3 days ago | parent [-] |
| What? This is even less coherent. You weren't talking to GPT-4o about philosophy recently, were you? |
| ▲ | mallowdram 3 days ago | parent [-] |
| You'd need to know cutting-edge linguistics and signaling theory well beyond Shannon to parse this, not NLP or engineering reduction. What I've stated is extremely coherent to Systemic Functional Linguists. Beyond this point, engineers actually have to know what signaling is, rather than 'information.' https://www.sciencedirect.com/science/article/abs/pii/S00033... |
| Ultimately, engineering chose the wrong approach to automating language, and it sinks the field. It's irreversible. |
| ▲ | morpheos137 3 days ago | parent | next [-] |
| If not language, what training substrate do you suggest? Also, note that strong ideas are expressible coherently. You have an ironic pattern in your comments of getting lost in the very language morass you propose to deprecate. If we don't train models on language, what do we train them on? I have some ideas of my own, but I am interested in whether you can clearly express yours. |
| ▲ | mallowdram 3 days ago | parent [-] |
| Neural/spatial syntax. Analoga of differentials. The code to operate this gets built before the component. If language doesn't really mean anything, then automating it in geometry is worse than problematic. The solution is starting over at 1947: measurement, not counting. |
| ▲ | morpheos137 3 days ago | parent [-] |
| The semantic meaning of your words here is non-existent. It is unclear to me how else you can communicate in a text-based forum if not by using words. Since you can't, despite your best effort, I am left to conclude you are psychotic and should probably be banned and seek medical help. |
| ▲ | mallowdram 3 days ago | parent [-] |
| Engineers are so closed-minded, you can't see the freight train bearing down on the industry. All to science's advantage in replacing engineers. Interestingly, if you dissect that last entry, I've just made the case that measurement (analog computation) is superior to counting (binary computation) and laid out the strategy for how. All it takes is brains, or an LLM, to decipher what it states. https://pmc.ncbi.nlm.nih.gov/articles/PMC3005627/ |
| "First, cell assemblies are best understood in light of their output product, as detected by ‘reader-actuator’ mechanisms. Second, I suggest that the hierarchical organization of cell assemblies may be regarded as a neural syntax. Third, constituents of the neural syntax are linked together by dynamically changing constellations of synaptic weights (‘synapsembles’). Existing support for this tripartite framework is reviewed and strategies for experimental testing of its predictions are discussed." |
| ▲ | morpheos137 3 days ago | parent [-] |
| I 100% agree analog computing would be better at simulating intelligence than binary. Why don't you state that rather than burying it under a mountain of psychobabble? |
| ▲ | mallowdram 2 days ago | parent [-] |
| Listing the conditions and dichotomizing the frameworks (counting/measurement) is the farthest thing from psychobabble. Anyone with knowledge of analog knows these terms - and enough to know analog doesn't simulate anything. And intelligence isn't what's being targeted. |
| |
| ▲ | ACCount37 3 days ago | parent | prev [-] |
| One of the main takeaways from The Bitter Lesson was that you should fire your linguists. GPT-2 knows more about human language than any linguist could ever hope to convey. If you're hitching your wagon to human linguists, you'll always find yourself in a ditch in the end. |
| ▲ | mallowdram 3 days ago | parent [-] |
| Sorry, 2 billion years of neurobiology beats 60 years of NLP/LLMs, which know next to nothing about language, since "arbitrary points can never be refined or defined to specifics." Check your corners and know your inputs. The bill is due on NLP. |