libraryofbabel 3 hours ago

I was interested (and slightly disappointed) to read that the knowledge cutoff for Gemini 3 is the same as for Gemini 2.5: January 2025. I wonder why they didn't train it on more recent data.

Is it possible they used the same pre-trained base model and just fine-tuned and RL-ed it better (which, of course, is where all the secret-sauce training magic is these days anyhow)? That would be odd, especially for a major version bump, but it's sort of what having the same training cutoff points to.

simonw 3 hours ago | parent [-]

The model card says: https://storage.googleapis.com/deepmind-media/Model-Cards/Ge...

> This model is not a modification or a fine-tune of a prior model.

I'm curious why they decided not to update the training data cutoff date too.

stocksinsmocks 2 hours ago | parent [-]

Maybe that date is a rule of thumb for when AI-generated content became so widespread that it is likely to have contaminated any later training data. Given that people have passed Markov-chain bots off as authentic Reddit users, it probably doesn’t go back nearly far enough.
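
For what it's worth, "spoofing with Markov chains" needs almost nothing: here's a minimal sketch of a word-level Markov chain text generator (the corpus string and function names are made up for illustration, not taken from any real bot).

  import random
  from collections import defaultdict

  def build_chain(text, order=1):
      """Map each run of `order` words to the words that follow it in the corpus."""
      words = text.split()
      chain = defaultdict(list)
      for i in range(len(words) - order):
          chain[tuple(words[i:i + order])].append(words[i + order])
      return chain

  def generate(chain, order=1, length=20):
      """Random-walk the chain to emit a superficially plausible word sequence."""
      out = list(random.choice(list(chain.keys())))
      for _ in range(length):
          followers = chain.get(tuple(out[-order:]))
          if not followers:
              break
          out.append(random.choice(followers))
      return " ".join(out)

  # Toy corpus for illustration only.
  corpus = "the model card says the cutoff is january and the cutoff is the same as before"
  print(generate(build_chain(corpus)))

Output from something this crude already reads enough like a low-effort comment to slip into scraped data, which is the contamination worry in a nutshell.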