jmalicki 4 hours ago:
It does have access to its thoughts. This is literally what thinking models do. They write out thoughts to a scratch pad (which you can see!) and use that as part of the prompt.
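[A minimal sketch of the scratch-pad loop the comment describes: the model's intermediate "thinking" text is appended back into the prompt before the final answer is produced, so the trace literally becomes more prompt text. `generate` is a hypothetical stand-in for a real model call, stubbed here so the sketch runs.]

```python
def generate(prompt: str, stop: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # Returned strings are canned so the example is self-contained.
    if stop == "</think>":
        return "2 + 2 is 4 by basic arithmetic."
    return "4"

def answer_with_scratch_pad(question: str) -> tuple[str, str]:
    prompt = f"Question: {question}\n<think>\n"
    # First pass: the model emits its visible "thinking" trace.
    thoughts = generate(prompt, stop="</think>")
    # The trace is fed back in verbatim -- it is just part of the prompt
    # when the final answer is sampled.
    prompt += thoughts + "\n</think>\nAnswer: "
    final = generate(prompt, stop="\n")
    return thoughts, final

thoughts, final = answer_with_scratch_pad("What is 2 + 2?")
print(final)  # "4"
```

Nothing in this loop forces the second pass to actually *use* the trace, which is what the replies below dispute.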
fc417fc802 4 hours ago:
It's important to be aware that while those "thoughts" can be a useful aid for human understanding, they don't reliably reflect what's going on under the hood. There are various academic papers on the matter, or you can closely inspect the traces for a more logically oriented question yourself and spot impossible inconsistencies.
mmoll 4 hours ago:
It doesn’t mean that these “thoughts” influenced their final decision the way they would in humans. An LLM will tell you a lot of things it “considered”, and its final output might still be completely independent of that.
grey-area 4 hours ago:
They do not in fact do that. The ‘thoughts’ are not a chain of logic.
sumeno 2 hours ago:
You have a fundamental misunderstanding of what the model is doing. It's not your fault, though; you're buying into the advertising of how it works.
4 hours ago: [deleted]