bennettdixon 2 days ago
Nice write-up. One thing that stood out is the V2 to V3 jump. One of my clients is integrating personal wellness & AI, and we took a slightly different route: the health data and personal data live in separate DBs with an encrypted mapping layer between them. That way the model only sees health context attached to a unique pseudonymous session ID, never the real user. Your problem almost seems harder, because the PII *is* the signal/context. One challenge we're facing is re-identification, e.g. rich health profiles being identifiable in themselves. Curious if you have thought about that side of things with your V3 implementation?
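(For anyone curious, the separation pattern described above might look roughly like this minimal sketch. The HMAC-based pseudonym derivation, key handling, and dict-backed "databases" are my illustrative assumptions, not the commenter's actual architecture.)

```python
import hmac
import hashlib

# Secret held only by the mapping layer (assumption: real key management
# lives in a KMS; this constant is purely illustrative).
MAPPING_KEY = b"example-mapping-key"

def pseudonym_for(user_id: str) -> str:
    """Derive a stable pseudo-user ID so the model never sees the real one."""
    return hmac.new(MAPPING_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# Personal data and health data live in separate stores; only the
# pseudonym crosses the boundary into model context.
personal_db = {"alice@example.com": {"name": "Alice"}}
health_db = {}

session = pseudonym_for("alice@example.com")
health_db[session] = {"resting_hr": 62, "sleep_hours": 7.5}

# The model receives only health context keyed by the pseudonym:
model_context = {"session": session, **health_db[session]}
```

The re-identification worry is visible even in this toy version: `model_context` contains no direct identifier, but a sufficiently rich health profile can itself be unique.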
n00pn00p 2 days ago | parent
That's a great point. Because my tool is designed for security operations and triage, the context (like knowing an IP is from Hetzner, or that a domain is a known burner) is actually the signal the LLM needs to do its job. I made a conscious trade-off to let some contextual metadata pass through to preserve utility.

Since I'm based in the Netherlands, I look at this strictly through the lens of Dutch privacy law (the AVG, the Dutch name for the GDPR). Under the AVG there's a hard line between anonymized data and pseudonymized data: because of the exact 'mosaic effect' you mentioned, pseudonymized data is legally still treated as personal data. So the re-identification risk is an accepted reality. Essentially, I treat the tool as an extra layer of effort to reduce PII leaks, but it's not foolproof against the context clues.
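(A minimal sketch of the trade-off described above: tokenize the raw identifier but pass the triage-relevant metadata through. The key, token format, and `redact_ip` helper are my illustrative assumptions, not the author's actual implementation.)

```python
import hmac
import hashlib

# Illustrative secret; a real deployment would rotate this.
REDACTION_KEY = b"rotate-me"

def redact_ip(ip: str, context: dict) -> dict:
    """Replace a raw IP with a stable token while keeping the contextual
    metadata (provider, reputation) the LLM needs for triage."""
    token = "ip_" + hmac.new(REDACTION_KEY, ip.encode(), hashlib.sha256).hexdigest()[:12]
    return {"indicator": token, **context}

# The provider label is the signal; the raw address is not.
event = redact_ip(
    "203.0.113.7",
    {"provider": "Hetzner", "reputation": "known-burner-hosting"},
)
```

Because the token is stable, the model can still correlate repeated sightings of the same IP across a session, which is exactly the kind of context clue that keeps this pseudonymization rather than anonymization under the AVG.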