AndrewMPT 4 days ago
Great question, and I fully agree — privacy in mental health is sacred. We don't train on user chats directly. Instead, we collaborate with a team of 42 certified psychologists who work with us to curate anonymized case structures, decision trees, and response strategies based on real but depersonalized therapeutic experience. These professionals help us model how psychological support is provided — without ever using actual user conversations. Our system is trained on synthesized, anonymized session data that reflects best practices, not private logs. It's not buried in the T&Cs — we're very explicit about our commitment to data ethics and user safety. No session data is used for model training, and user interaction is fully confidential and never stored in a way that links it to identities. Our goal is to make high-quality support available without compromising trust. Let me know if you'd like more technical or ethical detail — happy to share!
flir 4 days ago | parent
That's a first-rate response — and a very thoughtful way to preserve anonymity. Thanks, I appreciate it. Those decision trees sound interesting — are you, essentially, integrating an LLM and an expert system?
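For readers wondering what such a hybrid might look like in practice: a minimal sketch in which a hand-curated decision tree (the expert-system part) selects a response strategy, and an LLM generates the actual wording under that constraint. Everything here is hypothetical — the node structure, the rules, and the stubbed `generate()` call are illustrative only, not a description of AndrewMPT's actual system.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One node in a curated decision tree: a branch (test) or a leaf (strategy)."""
    strategy: Optional[str] = None                 # leaf: response-strategy label
    test: Optional[Callable[[str], bool]] = None   # branch: predicate on user text
    yes: Optional["Node"] = None
    no: Optional["Node"] = None

def route(node: Node, message: str) -> str:
    """Walk the expert-system tree to pick a response strategy for this message."""
    while node.strategy is None:
        node = node.yes if node.test(message) else node.no
    return node.strategy

def generate(strategy: str, message: str) -> str:
    """Stand-in for an LLM call, constrained by the chosen strategy.

    A real system would pass the strategy into the prompt or decoding
    constraints; this stub just tags the reply with it.
    """
    return f"[{strategy}] Tell me more about that."

# Toy tree: crisis language escalates to a human; otherwise reflective listening.
tree = Node(
    test=lambda m: "crisis" in m.lower(),
    yes=Node(strategy="escalate_to_human"),
    no=Node(strategy="reflective_listening"),
)

message = "I've been feeling low lately"
print(generate(route(tree, message), message))
```

The appeal of this split is that the safety-critical routing stays auditable (psychologists can inspect and sign off on the tree), while the LLM is confined to phrasing within a strategy the tree has already approved.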