iamkrystian17 6 hours ago

Totally valid take. Models might have different prompting guidelines for best results. If a developer uses one tool and wants to optimize their config as much as possible for that specific tool, LNAI is probably not for them.

However, given how many tools there are and how fast each one moves, I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.

embedding-shape 6 hours ago | parent [-]

> I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.

Most prompts I run, I execute in all four at the same time and literally compare the git diffs from their work, so I understand totally :) But even for comparison, I think that with the same identical config for all of them, you're not actually seeing and understanding the differences, because again, they need different system prompts. By using the same config when you compare, you're not accurately seeing the best of each model.

iamkrystian17 6 hours ago | parent [-]

Fair point. LNAI does support per-tool config overrides in .ai/.{codex/claude/cursor/etc.} directories, so you kind of get the best of both worlds :) You can sync identical configs while having the flexibility to define per-tool overrides where needed, all with a single source of truth in the .ai/ directory.
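
To make that concrete, here's a hypothetical layout sketch (the file names are illustrative, not LNAI's actual conventions; check its docs for the real directory names):

```
.ai/
├── instructions.md          # shared config, synced verbatim to every tool
└── .claude/
    └── instructions.md      # Claude-specific override, takes precedence
                             # over the shared file for that tool only
```

The idea being: tools that need no tuning just get the shared file, and you only fork a per-tool copy when a model actually benefits from a different system prompt.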