▲ flir 4 days ago
How did you get people to agree to training a chatbot on their sessions? That strikes me as extremely intimate text. Is it an "it's in the T&Cs" deal, or did you seek a separate opt-in? I'm asking because the answer will shed light on the level of privacy "the average consumer" is comfortable with.
▲ AndrewMPT 4 days ago | parent
Great question, and I fully agree: privacy in mental health is sacred.

We don’t train on user chats directly. Instead, we collaborate with a team of 42 certified psychologists who curate anonymized case structures, decision trees, and response strategies based on real but depersonalized therapeutic experience. These professionals help us model how psychological support is provided without ever using actual user conversations. Our system is trained on synthesized, anonymized session data that reflects best practices, not private logs.

It’s not buried in the T&Cs; we’re explicit about our commitment to data ethics and user safety. No session data is used for model training, and user interactions are fully confidential and never stored in a way that links them to identities. Our goal is to make high-quality support available without compromising trust. Let me know if you’d like more technical or ethical detail. Happy to share!
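To make the "case structure" idea concrete, here is a minimal sketch of what a depersonalized template and a synthetic training example derived from it might look like. Every name in it (CaseTemplate, ResponseStrategy, synthesize_training_example) is a hypothetical illustration, not the product's actual schema; the point it demonstrates is that training examples can be generated from curated templates with no user text in the pipeline.

    from dataclasses import dataclass

    # Hypothetical illustration only: none of these names come from the
    # product described above. It sketches one plausible shape for a
    # psychologist-curated, depersonalized "case structure" that a model
    # could be trained on instead of raw user chats.

    @dataclass
    class ResponseStrategy:
        """A curated intervention pattern, e.g. validate-then-reframe."""
        name: str
        steps: list[str]

    @dataclass
    class CaseTemplate:
        """A depersonalized case: presenting issue plus decision branches."""
        presenting_issue: str          # e.g. "generalized anxiety"
        contraindications: list[str]   # branches the model must avoid
        strategies: list[ResponseStrategy]

    def synthesize_training_example(case: CaseTemplate) -> dict:
        """Turn a curated template into a synthetic prompt/response pair.

        No user text is involved; the 'session' is generated from the
        template alone, which is the property the parent comment claims.
        """
        strategy = case.strategies[0]
        prompt = f"Client describes symptoms consistent with {case.presenting_issue}."
        response = " ".join(strategy.steps)
        return {"prompt": prompt, "response": response}

    if __name__ == "__main__":
        template = CaseTemplate(
            presenting_issue="generalized anxiety",
            contraindications=["giving medical diagnoses"],
            strategies=[ResponseStrategy(
                name="validate-then-reframe",
                steps=["Acknowledge the feeling.", "Offer a gentler framing."],
            )],
        )
        print(synthesize_training_example(template))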