timtimmy 4 hours ago

Google keeps changing their privacy and “don’t train on my data/code” options. When gemini-cli launched, there was a clear toggle for “don’t train on my code.” That’s now gone; for me it just links to a generic privacy page. Maybe something about my account changed; I can’t figure it out. Deep in the Cloud Gemini console there’s another setting that might control training, but it’s not clear which products it actually covers.

Trying to pay for Gemini 3 is confusing. Maybe an AI Ultra personal subscription? I already pay for OpenAI’s and Anthropic’s pro/max plans and would happily pay Google too. But the only obvious option is a $250/month tier, and its documentation indicates Google can train on your code unless you find and enable the correct opt-out. If that opt-out even exists everywhere, it’s not obvious where it lives or which products it applies to.

Workspace complicates it further. Google advertises that with business Workspace accounts your data isn’t used for training. So I was going to try Antigravity on our codebase. By this point I knew I couldn’t trust Google, so I read the ToS carefully. They train on your prompts and source code, and there doesn’t appear to be a way to pay them and opt out right now. Be careful: paying for Google Workspace does not protect you. Always read the ToS.

Be careful with AI Studio and your Google Workspace accounts, too. They train on your prompts unless you switch it to API mode.

The result is a lot of uncertainty. I genuinely have no idea how to pay Google for Gemini without risking my code being used for training. And if I do pay, I can’t tell whether they’ll train on my prompts anyway.

The marketing for their coding products does not clearly state when they do or do not train on your prompts and code.

I had to run deep research just to understand the risks of using Gemini 3 for agentic work, and I still don’t feel confident that I understand them. I might have said some incorrect things above, but I am just so confused. I feel like I have less than a 75% grasp on the situation.

I don’t have a lot of trust left. And honestly, this feels confusing and deceptive. One could easily mistake it for a deliberate strategy to gather training data through ambiguity and dark patterns; it certainly looks like this could be Google’s strategy to win the AI race. I’ll assume it only looks that way, and that they aren’t being evil on purpose.

OpenAI, in particular, has my trust. They get it. They are carefully building the customer experience; they are product- and customer-driven from the top.

bossyTeacher 3 hours ago | parent [-]

>OpenAI in particular has my trust.

I wouldn't trust Sam Altman. Or any of the big players really.

fishmicrowaver 2 hours ago | parent [-]

> trust

Hahaha...HAHAhaha. HAHAHHAHAHAHAHAHAHA!!!