▲ caminanteblanco 5 days ago
Ok, one issue I have with this analysis is the breakdown between input and output tokens. I'm the kind of person who spends most of my chat asking questions, so I might only use 20-ish input tokens per prompt, while Gemini has to put out several hundred, which would seem to affect the economics quite a bit.
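The asymmetry being described can be sketched with a bit of arithmetic. The per-million-token prices below are illustrative placeholders, not actual Gemini (or any provider's) rates, and the 20-in / 400-out split is just the scenario from the comment:

```python
# Illustrative sketch: per-prompt cost when output tokens dominate.
# Prices are made-up placeholders, NOT real provider rates.
INPUT_PRICE_PER_M = 1.25    # $/1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 10.00  # $/1M output tokens (assumed)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one prompt at the assumed per-token rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A short question with a long answer: 20 input tokens, 400 output tokens.
cost = prompt_cost(20, 400)
output_share = (400 * OUTPUT_PRICE_PER_M / 1_000_000) / cost
print(f"cost per prompt: ${cost:.6f}, output share: {output_share:.1%}")
```

Under these assumed numbers, the output side accounts for over 99% of the per-prompt cost, which is why an input/output breakdown matters so much to the economics.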
▲ bcrosby95 5 days ago
Yeah, I've noticed ChatGPT 5 is very chatty. I can ask a one-sentence question and get back 3-4 paragraphs, most of which I ignore, depending on the task.
▲ pakitan 5 days ago
It may hurt them financially, but they are fighting for market share, and I'd argue short answers will drive users away. I much prefer the long ones, as they often include things I haven't directly asked about but that are still helpful.
▲ red2awn 5 days ago
It also didn't take into account that a lot of the new models are reasoning models, which spit out a lot of output tokens.