avhception 6 hours ago

When an agent just plows ahead with a wrong interpretation or understanding of something, I like to ask them why they didn't stop to ask for clarification. Just a few days ago, while refactoring minor stuff, I had an agent replace all sqlite-related code in that codebase with MariaDB-based code. When I asked why that happened, the answer was that it had confused MariaDB with sqlite because the code in question deals with, among other things, MariaDB Docker containers. So the word MariaDB pops up a few times in code and comments.

I then asked if there is anything I could do to prevent misinterpretations from producing wild results like this. So I got the advice to put an instruction in AGENTS.md that would urge agents to ask for clarification before proceeding. But I didn't add it. Out of the 25 lines of my AGENTS.md, many are already variations of that. The first three:

- Do not try to fill gaps in your knowledge with overzealous assumptions.

- When in doubt: Slow down, double-check context, and only touch what was explicitly asked for.

- If a task seems to require extra changes, pause and ask before proceeding.

If these are not enough to prevent stuff like that, I don't know what could.

Sevii 6 hours ago | parent | next [-]

Are agents actually capable of answering why they did things? An LLM can review the previous context, add your question about why it did something, and then use next token prediction to generate an answer. But is that answer actually why the agent did what it did?

gas9S9zw3P9c 6 hours ago | parent | next [-]

It depends. If you have an LLM that uses reasoning, the explanation for why decisions are made can often be found in the reasoning token output. So if the agent later has access to that context, it could see why a decision was made.

Kubuxu 5 hours ago | parent [-]

Reasoning, in the majority of cases, is pruned at each conversation turn.

DonHopkins 4 hours ago | parent [-]

The cursor-mirror skill and cursor_mirror.py script let you search through and inschpekt all of your chat histories, all of the thinking bubbles and prompts, all of the context assembly, all of the tool and MCP calls and parameters, and analyze what it did, even after Cursor has summarized and pruned and "forgotten" it -- it's all still there in the chat log and sqlite databases.

cursor-mirror skill and reverse engineered cursor schemas:

https://github.com/SimHacker/moollm/tree/main/skills/cursor-...

cursor_mirror.py:

https://github.com/SimHacker/moollm/blob/main/skills/cursor-...

  The German Toilet of AI

  "The structure of the toilet reflects how a culture examines itself." — Slavoj Zizek

  German toilets have a shelf. You can inspect what you've produced before flushing. French toilets rush everything away immediately. American toilets sit ambivalently between.

  cursor-mirror is the German toilet of AI.

  Most AI systems are French toilets — thoughts disappear instantly, no inspection possible. cursor-mirror provides hermeneutic self-examination: the ability to interpret and understand your own outputs.

  What context was assembled?
  What reasoning happened in thinking blocks?
  What tools were called and why?
  What files were read, written, modified?

  This matters for:

  Debugging — Why did it do that?
  Learning — What patterns work?
  Trust — Is this skill behaving as declared?
  Optimization — What's eating my tokens?

  See: Skill Ecosystem for how cursor-mirror enables skill curation.
----

https://news.ycombinator.com/item?id=23452607

According to Slavoj Žižek, Germans love Hermeneutic stool diagnostics:

https://www.youtube.com/watch?v=rzXPyCY7jbs

>Žižek on toilets. Slavoj Žižek during an architecture congress in Pamplona, Spain.

>The German toilets, the old kind -- now they are disappearing, but you still find them. It's the opposite. The hole is in front, so that when you produce excrement, they are displayed in the back, they don't disappear in water. This is the German ritual, you know? Use it every morning. Sniff, inspect your shits for traces of illness. It's high Hermeneutic. I think the original meaning of Hermeneutic may be this.

https://en.wikipedia.org/wiki/Hermeneutics

>Hermeneutics (/ˌhɜːrməˈnjuːtɪks/)[1] is the theory and methodology of interpretation, especially the interpretation of biblical texts, wisdom literature, and philosophical texts. Hermeneutics is more than interpretive principles or methods we resort to when immediate comprehension fails. Rather, hermeneutics is the art of understanding and of making oneself understood.

----

Here's an example cursor-mirror analysis of an experiment: 23 runs of four agents playing several turns of Fluxx per run (1 run = 1 completion call), totaling 1045+ events, 731 tool calls, 24 files created, 32 images generated, and 24 custom Fluxx cards created:

Cursor Mirror Analysis: Amsterdam Fluxx Championship -- Deep comprehensive scan of the entire FAFO tournament development:

amsterdam-flux CURSOR-MIRROR-ANALYSIS.md:

https://github.com/SimHacker/moollm/blob/main/skills/experim...

amsterdam-flux simulation runs:

https://github.com/SimHacker/moollm/tree/main/skills/experim...

mkesper 2 hours ago | parent [-]

Just an update re German toilets: no toilet installed in the last 30 years (that I know of) uses a shelf anymore. This reduces water usage by about 50% per flush.

DonHopkins an hour ago | parent [-]

But then what do you have to talk about all day??!

Onavo 3 hours ago | parent | prev | next [-]

Well, the entire field of explainable AI has mostly thrown in the towel...

bananapub 3 hours ago | parent | prev [-]

Of course not, but it can often give a plausible answer, and it's possible that answer will actually happen to be correct - not because it did, or is capable of, any introspection, but because its token outputs in response to the question might semi-coincidentally be a token input that changes the future outputs in the same way.

bandrami 6 hours ago | parent | prev | next [-]

Isn't that question a category error? The "why" the agent did that is that it was the token that best matched the probability distribution of the context and the most recent output (modulo a bit of randomness). The response to that question will, again, be the tokens that best match the probability distribution of the context (now including the "why?" question and the previous failed attempt).

tibbar 6 hours ago | parent [-]

If the agent can review its reasoning traces, which I think is often true in this era of 1M-token contexts, then it may be able to provide a meaningful answer to the question.

bandrami 6 hours ago | parent [-]

Wait, no, that's the category error I'm talking about. Any answer other than "that was the most likely next token given the context" is untrue. It is not describing what actually happened.

tibbar 5 hours ago | parent | next [-]

I think this statement is on the same level as "a human cannot explain why they gave the answer they gave because they cannot actually introspect the chemical reactions in their brain." That is true, but a human often has an internal train of thought that preceded their ultimate answer, and it is interesting to know what that train of thought was.

In the same way, it is often quite instructive to know what the reasoning trace was that preceded an LLM's answer, without having to worry about what, mechanically, the LLM "understood" about the tokens, if this is even a meaningful question.

bandrami 5 hours ago | parent [-]

But it's not a reasoning trace. Models could produce one if they were designed to (an actual stack of the calls and the states of the tensors with each call, probably with a helpful lookup table for the tokens) but they specifically haven't been made to do that.

rocqua 5 hours ago | parent [-]

When you put an LLM in reasoning mode, it will approximately have a conversation with itself. This mimics an inner monologue.

That conversation is held in text, not in any internal representation. That text is called the reasoning trace. You can then analyse that trace.

bandrami 5 hours ago | parent [-]

Unless things have changed drastically in the last 4 months (the last time I looked at it), those traces are not stored but reconstructed when asked. Which is still the same problem.

ehsanu1 5 hours ago | parent | next [-]

They aren't necessarily "stored" but they are part of the response content. They are referred to as reasoning or thinking blocks. The big 3 model makers all have this in their APIs, typically in an encrypted form.

Reconstruction of reasoning from scratch can happen in some legacy APIs like the OpenAI Chat Completions API, which doesn't support passing reasoning blocks around. They specifically recommend that folks use the newer Responses API to improve both accuracy and latency (by reusing existing reasoning).
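
For concreteness, a minimal sketch of that reuse, assuming the OpenAI Node SDK's Responses API; the model name and prompts are placeholders invented for illustration:

  import OpenAI from "openai";

  const client = new OpenAI();

  // First turn: the model emits reasoning items alongside its visible output.
  const first = await client.responses.create({
    model: "gpt-5", // placeholder model name
    input: "Refactor the sqlite helpers; leave the MariaDB container code alone.",
  });

  // Follow-up turn: chaining via previous_response_id lets the API reuse the
  // reasoning state from the first turn instead of reconstructing it from scratch.
  const followUp = await client.responses.create({
    model: "gpt-5",
    previous_response_id: first.id,
    input: "Why did you only touch the sqlite helpers?",
  });

  console.log(followUp.output_text);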

tibbar 5 hours ago | parent | prev [-]

For a typical coding agent, there are intermediate tool call outputs and LLM commentary produced while it works on a task and passed to the LLM as context for follow-up requests. (Hence the term agent: it is an LLM call in a loop.) You can easily see this with e.g. Claude Code, as it keeps track of how much space is left in the context and requires "context compaction" after the context gradually fills up over the course of a session.

In this regard, the reasoning trace of an agent is trivially accessible to clients, unlike the reasoning trace of an individual LLM API call; it's a higher level of abstraction. Indeed, I implemented an agent just the other day which took advantage of this. The OP that you originally replied to was discussing an agentic coding process, not an individual LLM API call.
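
A sketch of that loop, with callModel and runTool as hypothetical stand-ins rather than any particular SDK; the point is that every model turn and tool result lands in one accumulating transcript, which is the "trace" the agent can later be asked about:

  // Hypothetical types and helpers - not any particular SDK.
  type Message = { role: "user" | "assistant" | "tool"; content: string };
  type ModelReply = { text: string; toolCall?: { name: string; args: string } };

  declare function callModel(transcript: Message[]): Promise<ModelReply>;
  declare function runTool(name: string, args: string): Promise<string>;

  async function runAgent(task: string): Promise<string> {
    // The transcript IS the agent's accessible "reasoning trace":
    // every model turn and tool result is appended and fed back in.
    const transcript: Message[] = [{ role: "user", content: task }];

    while (true) {
      const reply = await callModel(transcript);
      transcript.push({ role: "assistant", content: reply.text });

      if (!reply.toolCall) return reply.text; // done: no further tool use requested

      const result = await runTool(reply.toolCall.name, reply.toolCall.args);
      transcript.push({ role: "tool", content: result });
      // (A real agent would also compact the transcript as the context fills up.)
    }
  }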

bandrami 3 hours ago | parent [-]

Well, right, I see those reasoning stages in reasoning models with Ollama, and if you ask it after the fact what its reasoning was, what it says is different from what it said at the time.

dash2 6 hours ago | parent | prev | next [-]

There can be higher- and lower-level descriptions of the same phenomenon. When the kettle boils, it's because the water molecules were heated by the electric element, but it's also because I wanted a cup of tea.

ChrisGreenHeur 5 hours ago | parent [-]

The LLM has no wants.

rafaelmn 5 hours ago | parent | prev | next [-]

> Any answer other than "that was the most likely next token given the context" is untrue.

"Because the matrix math resulted in the set of tokens that produced the output". "Because the machine code driving the hosting devices produced the output you saw". "Because the combination of silicon traces and charges on the chips at that exact moment resulted in the output". "Because my neurons fired in a particular order/combination".

I don't see how your statement is any more useful. If an LLM has access to reasoning traces it can realistically waddle down the CoT and figure out where it took a wrong turn.

Just like a human does with memories in context - that doesn't mean it's the full story. Your decision making is largely subconscious and nonverbal; you might not be aware of it, but any reasoning you give to explain why you did something is bound to be an incomplete story, created by your brain to explain what happened based on what it knows - there's hidden state it doesn't have access to. And yet we ask that question constantly.

ChrisGreenHeur 5 hours ago | parent [-]

Well, do you want something useful or something true?

The word "why" is used to get something true.

rocqua 5 hours ago | parent | prev [-]

If you want to be pedantic about it you could phrase it as follows.

When the LLM was in reasoning mode, it often expressed statement X in the reasoning context. Given that, and the relevance of statement X to the action taken, it seems likely that the presence of statement X in the context contributed to this action. Besides, the presence of statement X in the reasoning likely means that, given the previous context, the embeddings of X are close to the context.

Hence we think that the action was taken due to statement X.

And that output could have come from an LLM introspecting its own reasoning.

I don't think that phrasing things so pedantically is worth the extra precision, though. Especially not for the statement that inspecting the reasoning logs of an LLM can help give insight into why it acted a certain way.

tomashubelbauer 5 hours ago | parent | prev | next [-]

Just this morning I ran across an even narrower case of how AGENTS.md (in this case with GPT-5.3 Codex) can be completely ignored even when filled with explicit instructions.

I have a line there that says Codex should never use Node APIs where Bun APIs exist for the same thing. Routinely, Claude Code and now Codex would ignore this.

I just replaced that rule with a deterministic, TypeScript-compiler-powered, AST-based check. Now when the agent attempts to commit code with banned Node API usage, the pre-commit script fails, so it is forced to get it right.

I've found myself migrating more and more of my AGENTS.md instructions to compiler-based checks like these, where possible. I feel as though this shouldn't be needed if the models were good, but it seems to be, and I guess the deterministic nature of these checks beats relying on the LLM's questionable respect for the rules.
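
For illustration, a sketch of what such a check might look like using the TypeScript compiler API; the banned-module list, file name, and hook wiring are assumptions, not the actual script described above:

  // check-node-apis.ts (hypothetical): fail the commit if staged sources import
  // Node built-ins that have Bun-native equivalents. The banned list is illustrative.
  import ts from "typescript";

  const BANNED = new Set([
    "fs", "node:fs", "fs/promises", "node:fs/promises",
    "child_process", "node:child_process",
  ]);

  const files = process.argv.slice(2); // file list passed in by the pre-commit hook
  let failed = false;

  for (const file of files) {
    const text = await Bun.file(file).text();
    const source = ts.createSourceFile(file, text, ts.ScriptTarget.Latest, true);

    const visit = (node: ts.Node): void => {
      // Flag `import ... from "<banned module>"` declarations.
      if (ts.isImportDeclaration(node) && ts.isStringLiteral(node.moduleSpecifier)
          && BANNED.has(node.moduleSpecifier.text)) {
        console.error(`${file}: banned Node API import "${node.moduleSpecifier.text}" - use the Bun equivalent`);
        failed = true;
      }
      ts.forEachChild(node, visit);
    };
    visit(source);
  }

  if (failed) process.exit(1);

A pre-commit hook could then run it with something like bun check-node-apis.ts $(git diff --cached --name-only -- '*.ts') and block the commit on a non-zero exit code.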

iamflimflam1 4 hours ago | parent | next [-]

Not that much different from humans.

We have pre-commit hooks to prevent people doing the wrong thing. We have all sorts of guardrails to help people.

And the “modern” approach when someone does something wrong is not to blame the person, but to ask “how did the system allow this mistake? What guardrails are missing?”

MITSardine an hour ago | parent | prev [-]

I wonder if some of these could be embedded in the write tool calls?

sensanaty an hour ago | parent | prev | next [-]

I really hate that the anthropomorphizing of these systems has successfully taken hold in people's brains. Asking it why it did something is completely useless because you aren't interrogating a person with a memory or a rationale; you're querying a statistical model that is spitting out a justification for a past state it no longer occupies.

Even the "thinking" blocks in newer models are an illusion. There is no functional difference between the text in a thought block and the final answer. To the model, they are just more tokens in a linear sequence. It isn't "thinking" before it speaks; the "thought" is the speech.

Treating those thoughts as internal reflection of some kind is a category error. There is no "privileged" layer of reasoning happening in the silicon that then gets translated into the thought block. It’s a specialized output where the model is forced to show its work because that process of feeding its own generated strings back into its context window statistically increases the probability of a correct result. The chatbot providers just package this in a neat little window to make the model's "thinking" part of the gimmick.

I also wouldn't be surprised if asking it stuff like this was actually counterproductive, but for this I'm going off vibes. The logic is that by asking that, you're poisoning the context, similar to how if you try to generate an image by saying "It should not have a crocodile in the image", it will put a crocodile into the image. By asking it why it did something wrong, it'll treat that as the ground truth, and all future generation will have that snippet in it, nudging the output in such a way that the wrong thing itself will influence it to keep doing the wrong thing more and more.

mustaphah an hour ago | parent | prev | next [-]

This is like trying to fix hallucination by telling the LLM not to hallucinate.

geraneum 6 hours ago | parent | prev | next [-]

> So I got the advice to put an instruction in AGENTS.md that would urge agents to ask for clarification before proceeding.

You may want to ask the next LLM versions the same question after they feed this paper through training.

lebuin 5 hours ago | parent | prev | next [-]

It seems like LLMs in general still have a very hard time with the concepts of "doubt" and "uncertainty". In the early days this was very visible in the form of hallucinations, but it feels like they fixed that mostly by having better internal fact-checking. The underlying problem of treating assumptions as truth is still there, just hidden better.

avhception 5 hours ago | parent | next [-]

Doubt and uncertainty are left for us humans.

hnbad 5 hours ago | parent | prev [-]

LLMs are basically improv theater. If the agent starts out with a wildly wrong assumption it will try to stick to it and adapt it rather than starting over. It can only do "yes and", never "actually nevermind, let me try something else".

I once had an agent come up with what seemed like a pointlessly convoluted solution as it tried to fit its initial approach (likely sourced from framework documentation overemphasizing the importance of doing it "the <framework> way" when possible) to a problem it didn't really seem suited for, at least to me. It kept reassuring me that this was the way to go and that my concerns were invalid.

When I described the solution and the original problem to another agent running the same model, it instantly dismissed the solution and pointed out the same concerns I had raised - and it insisted those were deal breakers with the same confidence the other agent had shown in dismissing them as invalid.

In the past I've often found LLMs to be extremely opinionated while also flipping their positions on a dime once met with any doubt or resistance. It feels like I'm now seeing the opposite: the LLM just running with whatever it picked up first from the initial prompt and then being extremely stubborn and insisting on rationalizing its choice no matter how much time it wastes trying to make it work. It's sometimes better to start a conversation over than to try and steer it in the right direction at that point.

delaminator 36 minutes ago | parent | prev [-]

So many times I have ended up here:

"You're absolutely correct. I should have checked my skills before doing that. I'll make sure I do it in the future."