daxfohl a day ago

Yeah, I can't even get them to retain a simple state. I've tried having them run a maze, but instead of giving them the whole maze up front, I have them move one step at a time, tell them which directions are open from that square and ask for the next move, etc.

After a few moves they get hopelessly lost and just start wandering back and forth in a loop. Even when I prompt them explicitly to serialize a state representation of the maze after each step, and even if I prune the old context so they don't get tripped up on old state representations, they still get flustered and corrupt the state or lose track of things eventually.

They get the concept: if I explain the challenge and ask them to write a program to solve such a maze step-by-step like that, they can do it successfully on the first try! But when maintaining the state themselves, they still seem to struggle.
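
Concretely, the harness I'm describing looks something like this minimal sketch (`ask_llm` stands in for whatever chat API you're driving, and the maze is assumed to be a dict from cell coordinates to the set of open directions; both are illustrative, not any particular library):

```python
# Minimal sketch: drive the model one square at a time. `ask_llm` is a
# placeholder for your chat API; `maze` maps (x, y) -> set of open directions.
MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

def run_maze(maze, start, goal, ask_llm, max_steps=200):
    pos, path = start, []
    for _ in range(max_steps):
        if pos == goal:
            return path                      # solved
        open_dirs = sorted(maze[pos])
        move = ask_llm(
            f"You are in a maze. Open directions here: {open_dirs}. "
            "Reply with exactly one letter: N, S, E, or W."
        ).strip().upper()[:1]
        if move not in open_dirs:
            continue                         # model picked a wall; re-prompt
        dx, dy = MOVES[move]
        pos = (pos[0] + dx, pos[1] + dy)
        path.append(move)
    return path                              # step budget exhausted
```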

nomadpenguin a day ago | parent | next [-]

There are specialized architectures (the Tolman-Eichenbaum Machine*) that are able to complete this kind of task. Interestingly, once trained, their activations look strikingly similar to place and grid cells in real brains. The same team was also able to show (in a separate paper) that the TEM is mathematically equivalent to a transformer.

* https://www.sciencedirect.com/science/article/pii/S009286742...

kqr 14 hours ago | parent | prev | next [-]

My experience in trying to get them to play text adventures[1] is similar. I had to prompt with very specific leading questions to give them a decent chance of even recognising the main objective after the first few steps.

[1]: https://entropicthoughts.com/getting-an-llm-to-play-text-adv...

warrenm a day ago | parent | prev | next [-]

>I've tried having them run a maze, but instead of giving them the whole maze up front, I have them move one step at a time, tell them which directions are open from that square and ask for the next move, etc.

Presuming these are 'typical' mazes (like you find in a garden or local corn field in late fall), why not have the bot run the known-correct solving algorithm (or its mirror)?

daxfohl a day ago | parent [-]

Like I said, they can implement the algorithm to solve it, but when forced to maintain the state themselves, either internally or explicitly in the context, they are unable to do so and get lost.

Similarly, if you ask them to write a Sudoku solver, they have no problem. And if you ask an online model to solve a Sudoku, it'll write a solver in the background and use that to solve it. But (at least the last time I tried, about a year ago) if you ask them to solve one step-by-step using pure reasoning, without writing a program, they start spewing out all kinds of nonsense (though, humorously, they cheat: they'll still spit out the correct answer at the end).
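
The solver they write is essentially the textbook backtracking search, something like this sketch (board as a 9x9 list of lists, 0 for an empty cell):

```python
# Roughly the textbook backtracking Sudoku solver; solves the board in place.
def valid(board, r, c, v):
    if v in board[r] or any(board[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[i][j] != v
               for i in range(br, br + 3) for j in range(bc, bc + 3))

def solve(board):
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in range(1, 10):
                    if valid(board, r, c, v):
                        board[r][c] = v
                        if solve(board):
                            return True
                        board[r][c] = 0
                return False                 # no digit fits: backtrack
    return True                              # no empty cells: solved
```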

prewett 18 hours ago | parent | next [-]

That’s because there are lots of maze-solving algorithms on the web, so it’s easy to spit one back at you. But since they don’t actually understand how to solve a maze, or even how to apply an algorithm one step at a time, it doesn’t work well.

warrenm 20 minutes ago | parent | prev | next [-]

you do not need to remember state with the simplest solver:

- place your right hand on the right wall
- walk forward, never letting your hand leave the wall
- arrive at the exit

yes, you travel many dead ends along the way

but you are guaranteed to get to the end of a 'traditional' maze
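
In code, that rule is only a few lines; here's a sketch, assuming the maze is a dict from cell to its set of open directions. Note it does carry a little state (position and heading), but no map, and it only terminates for simply connected ('traditional') mazes:

```python
# Right-hand rule sketch: prefer turning right, then straight, then left,
# then back. Only position and heading are kept; no map of visited cells.
DIRS = ["N", "E", "S", "W"]                  # clockwise order
MOVES = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}

def right_hand_walk(maze, start, goal, heading="N", max_steps=100_000):
    pos, h = start, DIRS.index(heading)
    while pos != goal and max_steps > 0:
        for turn in (1, 0, -1, 2):           # right, straight, left, back
            d = DIRS[(h + turn) % 4]
            if d in maze[pos]:               # that direction is open
                h = (h + turn) % 4
                dx, dy = MOVES[d]
                pos = (pos[0] + dx, pos[1] + dy)
                break
        max_steps -= 1
    return pos == goal
```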

adventured a day ago | parent | prev [-]

So if you push, e.g., Claude Sonnet 4 or Opus 4.1 into a maze scenario, have it record its own pathing as it goes, and then refresh and feed the next Claude the progress so far, would that work around the inability to maintain long-duration context in such maze cases?

I make Claude do that on every project. I call them Notes for Future Claude and have it write notes for itself because of how quickly context accuracy erodes. It tends to write rather amusing notes to itself in my experience.
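
The pattern is roughly this (a hypothetical sketch; the file name, `ask_llm`, and the NOTES: marker are all made up, not any real tool's API):

```python
# Hypothetical sketch of the "Notes for Future Claude" handoff. `ask_llm`
# is a placeholder chat call; the NOTES: marker is an arbitrary convention.
from pathlib import Path

NOTES = Path("notes_for_future_claude.md")

def run_session(task, ask_llm):
    notes = NOTES.read_text() if NOTES.exists() else "(no notes yet)"
    reply = ask_llm(
        f"Task: {task}\n\nNotes from previous sessions:\n{notes}\n\n"
        "Do the next step. Then, after a line reading exactly 'NOTES:', "
        "write updated notes for whoever picks this up next."
    )
    work, _, new_notes = reply.partition("NOTES:")
    NOTES.write_text(new_notes.strip() or notes)  # keep old notes if missing
    return work
```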

daxfohl a day ago | parent | next [-]

This was from a few months ago, so things may be different now. I only used OpenAI models, and o3 did by far the best. On the basic scenario, where the model just moved one step at a time, gpt-4o performed about as well (which was still pretty good, all things considered), but once I started having them summarize state and such, o3 used that to improve its performance, whereas 4o actually got worse.

But yeah, that's one of the things I tried: "Your turn is over. Please summarize everything you have learned about the maze so someone else can pick up where you left off." It did okay, but it often included superfluous information; it sometimes forgot to include its current orientation (the maze actions were "move forward", "turn right", and "turn left", so knowing the current orientation was important); and it always forgot to include instructions on how to interpret the state: in particular, which absolute direction corresponded to an increase or decrease of which grid index.
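
For what it's worth, the state I was hoping it would converge on is something like this (field names are mine, purely illustrative):

```python
# Illustrative only: the kind of state record the model kept mangling.
# The `legend` field is the direction/index convention it always omitted.
from dataclasses import dataclass, field

@dataclass
class MazeState:
    row: int
    col: int
    facing: str                              # "N", "E", "S", or "W"
    visited: set = field(default_factory=set)
    legend: str = "N: row-1, S: row+1, E: col+1, W: col-1"
```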

I even tried to coax it into defining a formal state representation and "instructions for an LLM to use it" up front, to see if it would remember to include the direction/index correspondence, but it never did. It was actually amusing; it was apparent it was just doing whatever I told it rather than thinking for itself. Something like:

"Do you think you should include a map in the state representation? Would that be useful?"

"Yes, great idea! Here is a field for a map, and an algorithm to build it"

"Do you think a map would be too much information?"

"Yes, great consideration! I have removed the map field"

"No, I'm asking you. You're the one that's going to use this. Do you want a map or not?"

"It's up to you! I can implement it however you like!"

Mars008 16 hours ago | parent | prev [-]

> have it write notes for itself because of how quickly context accuracy erodes. It tends to write rather amusing notes to itself in my experience.

Just wondering: would it help to ask it to write the notes to someone else? Since the model itself wasn't in its training data, addressing notes to itself may be confusing.

yberreby 17 hours ago | parent | prev [-]

It took me a second to realize you were talking about prompting an LLM. This is fundamentally different from what the parent is doing. "AI" is so much more than "talking to a pretrained LLM."