▲ | bradly 19 hours ago |
> arguing with an LLM in this way is a waste of time

I wasn't arguing. I was asking it what it thought it was doing because I was bemused. The waste of time was everything before this point. I could have given up at 30 minutes, or an hour, but these darn LLMs are always so close, and maybe just one more prompt...
▲ | Yizahi 15 hours ago | parent | next [-]
LLM programs can't describe what they are doing; the tech doesn't allow it. An LLM can only generate text that resembles what it would say if such introspection were possible. A good example was published by Anthropic recently: they had an LLM add two integers, and it output the correct answer. Then they asked it to write out the steps it had executed to do that addition. It duly generated the primary-school algorithm: add one pair of digits, carry 1 if needed, add the next pair of digits, add the carry, combine the results, then the next digits, and so on. But in reality it computes the sum with probabilities, like any other generated tokens. Anthropic even admitted in that same article that the LLM was bullshitting them. Same with your query: it just generated the most likely text given its training data. It is unable to output what it actually did.
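For reference, the grade-school carry algorithm the model *narrates* looks like this (a sketch in Python; the comment's point is that the model outputs this story without ever executing anything like it):

```python
def schoolbook_add(a: str, b: str) -> str:
    """Add two non-negative integers digit by digit, right to left, with carry.

    This is the procedure the LLM describes when asked how it added two
    numbers -- not the probabilistic computation it actually performed.
    """
    # Pad the shorter operand with leading zeros so the digits line up.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry   # add one pair of digits
        digits.append(str(total % 10))      # keep the units digit
        carry = total // 10                 # carry 1 if needed
    if carry:
        digits.append(str(carry))           # leftover carry becomes the top digit
    return "".join(reversed(digits))

print(schoolbook_add("457", "948"))  # → 1405
```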
▲ | subscribed 18 hours ago | parent | prev [-]
Then look up how an LLM generates its answers :) Next time, just rephrase your problem.