maxloh 5 hours ago
Gemini 3 seems to have a much smaller token output limit than 2.5. I used to use Gemini to restructure essays into an LLM-style format to improve readability, but the Gemini 3 release was a huge step back for that particular use case. Even when the model is explicitly instructed to pause due to insufficient tokens rather than generating an incomplete response, it still truncates the source text too aggressively, losing vital context and meaning in the restructuring process. I hope the 3.1 release includes a much larger output limit.
esafak 5 hours ago
People did find Gemini very talkative, so it might be a response to that.
NoahZuniga 4 hours ago
The output limit has consistently been 64k tokens (including 2.5 Pro).
jayd16 5 hours ago
> Even when the model is explicitly instructed to pause due to insufficient tokens

Is there actually a chance it has the introspection to do anything with this request?
MallocVoidstar 4 hours ago
> Even when the model is explicitly instructed to pause due to insufficient tokens rather than generating an incomplete response

AI models can't do this, at least not from an instruction alone; you'd need some kind of custom 'agentic' setup.
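The kind of "agentic" setup mentioned here could be sketched roughly as follows: the model itself cannot pause, but the calling code can detect that generation stopped because of the token limit (most chat APIs report a finish reason for this) and issue a continuation request. Everything below is hypothetical scaffolding, not any real SDK: `generate` stands in for an actual API call and simply echoes its input in fixed-size chunks so the loop is runnable.

```python
# Sketch of a continuation loop around a hypothetical model call.
# `generate` is a stand-in for a real API: it returns (text, finish_reason),
# where finish_reason is "length" when the token limit cut output short.

def generate(prompt: str, max_tokens: int) -> tuple[str, str]:
    """Fake model: emits the prompt in chunks of at most max_tokens words."""
    words = prompt.split()
    finish = "stop" if len(words) <= max_tokens else "length"
    return " ".join(words[:max_tokens]), finish

def generate_full(source: str, max_tokens: int = 3) -> str:
    """Keep requesting continuations until the model stops naturally."""
    parts = []
    remaining = source
    while True:
        text, finish = generate(remaining, max_tokens)
        parts.append(text)
        if finish == "stop":  # model finished on its own; output is complete
            return " ".join(parts)
        # Output was truncated: ask only for the part not yet received.
        remaining = " ".join(remaining.split()[max_tokens:])
```

The point is that the pause-and-resume logic lives in the orchestrating code, not in the model; a bare instruction in the prompt gives the model no way to act on its own token budget.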