afc 2 days ago

My thinking is that over time I can incrementally codify many of these individual "taste" components as prompts that each review a change and propose suggestions.

For example, a single prompt could tell an LLM to make sure a code change doesn't introduce mutability when the same functionality can be achieved with immutable expressions. Another could flag useless log statements (with my specific description of what that means).

When I want to evaluate a code change, I run all these prompts separately against it and collect their structured output (via MCP). Then I incorporate this into my coding agent to provide automated review iterations.
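A minimal sketch of how such a prompt battery might be wired up. Everything here is hypothetical: `ReviewPrompt`, `call_llm`, and `review` are illustrative names, and `call_llm` is a stand-in for whatever model/MCP client you actually use, stubbed with a trivial keyword heuristic so the sketch runs on its own.

```python
import json
from dataclasses import dataclass

@dataclass
class ReviewPrompt:
    name: str
    instruction: str

# Each prompt encodes one narrow "taste" concern, per the idea above.
PROMPTS = [
    ReviewPrompt("immutability",
                 "Flag mutations that could be immutable expressions."),
    ReviewPrompt("logging",
                 "Flag log statements that add no diagnostic value."),
]

def call_llm(instruction: str, diff: str) -> str:
    """Stand-in for a real model call (via MCP or an SDK).
    Returns a JSON string shaped like {"findings": [...]}.
    Stubbed with a keyword heuristic purely so this example executes."""
    findings = []
    if "immutab" in instruction and "let mut" in diff:
        findings.append("uses `let mut` where an expression would do")
    if "log" in instruction and "console.log" in diff:
        findings.append("stray console.log left in the change")
    return json.dumps({"findings": findings})

def review(diff: str) -> dict:
    """Run every prompt separately; collect structured output per prompt."""
    return {p.name: json.loads(call_llm(p.instruction, diff))["findings"]
            for p in PROMPTS}

diff = "let mut total = 0;\nconsole.log('here');"
report = review(diff)
```

Keeping the prompts separate (rather than one mega-prompt) is what makes the next step workable: when a concern is missed, you know which prompt to extend, or you add a new one.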

If something slips through and I feel the need to provide context "manually", I add a new prompt (or figure out how to extend whichever one failed).