Ask HN: AI code assistants and privacy/security
1 point by slimebot80 7 hours ago
There's a lot of pressure to try every new flavour of LLM coding assistant. Every day I see programming influencers reporting on their new favourite AI tool, and everyone seems happy to let these assistants soak up as much "context" as they can to get the best results. My question is: how do we know what is being uploaded and accessed? E.g. recently there's been chat about Cursor moving outside of a project directory and deleting folders.

Which gets me thinking: if you ask an AI to perform an action, or to update some content, how do you limit the "context"? What tooling can you use to put fences around what the AI accesses? Which LLM tools have the best track record on this? Is everyone just asking LLMs about single files and expecting them to limit themselves to related files? And how do you trust one to stick to a "project" folder — which is a bit of a vague concept, given it's usually just a folder on a wider filesystem that could also be read for "context"?
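For what it's worth, the only kind of fence I can picture working is one enforced at the tool level, not by the model: resolve every path the agent asks to touch and refuse anything that lands outside the project root. A minimal sketch of that idea (function name and example paths are made up, not any real tool's API):

```python
from pathlib import Path

def is_inside(project_root: str, requested: str) -> bool:
    """Return True only if `requested` resolves inside `project_root`.

    Resolving first matters: a relative path like "../other" or a
    symlink can escape the project folder even though it "looks" local.
    """
    root = Path(project_root).resolve()
    target = (root / requested).resolve()
    return target == root or root in target.parents

# Illustrative checks against a hypothetical project folder:
print(is_inside("/home/me/proj", "src/main.py"))  # inside -> True
print(is_inside("/home/me/proj", "../other"))     # escapes -> False
```

That only covers filesystem reach, of course — it says nothing about what the tool then uploads from the files it *is* allowed to see, which is the part I can't inspect.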