embedding-shape 6 hours ago

Hmm, maybe it's just me, but it's a good thing the different agents use different files: different models need different prompts. Using the same system/user prompts across all three will just give you slightly worse results in each of them, instead of the best results you can get from each one. At least for the general steering system prompts.

Then for the application-specific documentation, I'd understand wanting to share it, as it stays the same for every agent touching the same codebase. But that's easily solved by putting it in DESIGN.md or whatever and appending "Remember to check against DESIGN.md before changing the architecture" or similar.
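
To illustrate, the setup described above might look something like this (file names are just illustrative, not a prescribed convention):

    DESIGN.md    # shared architecture docs, written once, read by humans and agents
    AGENTS.md    # general steering, ends with:
                 #   "Remember to check against DESIGN.md before changing the architecture."
    CLAUDE.md    # Claude-specific steering, same one-line pointer appended

The shared doc lives in one place; each per-model file stays small and just points at it.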

scosman 3 hours ago | parent | next [-]

It's great to have the option to optimize for different models, but I'm not going to on 99% of projects. And a good chunk of the agent docs are model agnostic (how to run linter, test libraries/practices). It's cool to have a way to reuse easily, even if that's copying AGENTS.md into the right places.

embedding-shape 3 hours ago | parent [-]

> And a good chunk of the agent docs are model agnostic (how to run linter, test libraries/practices).

Personally I put stuff like that in the readme, since it's useful for humans too, not just directions for machines, and I'm mostly building for other humans. The lighter and smaller the AGENTS.md ends up, the better the models are at following it too, from what I can tell.

iamkrystian17 6 hours ago | parent | prev [-]

Totally valid take. Models might have different prompting guidelines for best results. If a developer uses one tool and wants to optimize their config as much as possible for that specific tool, LNAI is probably not for them.

However, given how many tools there are and how fast each one moves, I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.

embedding-shape 6 hours ago | parent [-]

> I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.

Most prompts I write, I execute in all four at the same time and literally compare the git diffs from their work, so I understand totally :) But even for comparison, if you use the identical config for all of them, you're not actually seeing and understanding the difference, because again, they need different system prompts. By using the same config when you compare, you're not accurately seeing the best of each model.

iamkrystian17 6 hours ago | parent [-]

Fair point. LNAI does support per-tool config overrides in .ai/.{codex/claude/cursor/etc.} directories, so you kind of get the best of both worlds :) You can sync identical configs while having the flexibility to define per-tool configs where needed, all with a single source of truth in the .ai/ directory.
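
Roughly, based on the .ai/.{codex/claude/cursor} description above, the layout would look something like this (the file names inside the override directories are my guess, not LNAI's documented conventions):

    .ai/
      AGENTS.md         # shared config, synced to every tool
      .codex/
        AGENTS.md       # Codex-specific overrides
      .claude/
        CLAUDE.md       # Claude-specific overrides
      .cursor/
        rules.md        # Cursor-specific overrides

Shared stuff stays in one file; anything model-specific layers on top per tool.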