SkyPuncher 4 days ago

What's the use case for this compared to "standard" Codacy? What problem is solved by running this at code-generation time vs. the standard PR-based feedback?

How do you avoid "context pollution" when the LLM inevitably cycles on an issue? I've specifically disabled Cursor's "fix linter errors" feature because it constantly clogs up context.

jaimefjorge 4 days ago | parent [-]

Hi there. Codacy runs in the cloud when PRs are created. We run a large number of tools and provide quality gates, coding standards, etc. It’s a standardization use case. Codacy Guardrails is about local code analysis with a special focus on coding agents. The problem is that AI generates insecure code, and if you don’t have Codacy centrally analyzing things, you’ll introduce vulnerabilities into your repo.

On context pollution, unfortunately we rely a lot on the model actually being used. One thing we do is give clear instructions to only analyze the code being produced, not act on ALL the issues/problems identified. Beyond that, we recommend starting with a small selection of tools and going from there: an SCA (mandatory, really), a secret scanner, and a good, curated list of security rules. If we feed too many issues to the models they.. well.. don’t work.
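
To make the "only analyze the code being produced" part concrete, here is a minimal sketch in Python. It is illustrative only, not our actual API: the Finding type and filter_to_generated helper are hypothetical names. The idea is just to drop every finding that falls outside the lines the agent just wrote, so pre-existing repo issues never reach the model's context:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        file: str
        line: int
        rule: str
        message: str

    def filter_to_generated(findings, generated_lines):
        # generated_lines: {path: set of line numbers the agent just wrote}.
        # Keep only findings inside the freshly generated code, discarding
        # pre-existing issues elsewhere in the repo.
        return [f for f in findings
                if f.line in generated_lines.get(f.file, set())]

    findings = [
        Finding("app.py", 12, "hardcoded-secret", "API key in source"),
        Finding("app.py", 3, "unused-import", "unused import 'os'"),
    ]
    # Suppose the agent only wrote lines 10-25 of app.py:
    print(filter_to_generated(findings, {"app.py": set(range(10, 26))}))
    # -> only the hardcoded-secret finding on line 12 is fed back

That one filter does most of the work: the model only ever sees findings it can act on in the current edit, which keeps it from cycling on unrelated issues.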