angadsg 8 hours ago
Hi, engineer on this add-in. Fair concern, but we never train on business or enterprise user data, or on data from any ChatGPT account that has opted out of training.
Avicebron 8 hours ago
Forgive my ignorance: how do you folks manage context retention? Say someone has a sensitive Excel document they want inference done over; how is that data actually sent to the model, and is it then stored or deleted? One of the biggest barriers to adoption seems to be concern over data leaving people's ecosystem and then not being protected, or being retained in some way. Is this an SLA that a small or medium-sized company could get?
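To make concrete what I mean by data leaving the ecosystem, here is a rough sketch of what a generic Office.js add-in flow can look like. It is purely illustrative: the endpoint URL and payload shape are made up, and this is not the actual ChatGPT add-in code. The point is that once the add-in reads a range and posts it to a remote API, retention and deletion become a server-side policy question that the client code cannot see.

    // Illustrative sketch of a generic Excel add-in sending cell data for inference.
    // Hypothetical endpoint and payload; not the actual ChatGPT add-in implementation.
    async function sendSelectionForInference(): Promise<void> {
      await Excel.run(async (context) => {
        // Read the user's current selection from the workbook.
        const range = context.workbook.getSelectedRange();
        range.load("values");
        await context.sync();

        // The cell contents are now in the add-in's JavaScript runtime,
        // and the POST below sends them off the user's machine over TLS.
        const response = await fetch("https://api.example.com/v1/infer", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ cells: range.values }),
        });

        const result = await response.json();
        // Whether and how long the server retains `cells` is governed by the
        // provider's data-retention policy, not by anything visible here.
        console.log(result);
      });
    }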
Acmeon 8 hours ago
Yeah, I expected that you do not train on business or enterprise user data. But I am not only worried about "training"; I am also worried about "sharing", and about cases where an individual integrates an add-in and then inadvertently leaks sensitive data. That said, it is worth noting that these security considerations apply to most Office Add-ins, not just the ChatGPT add-in.