ikidd 7 hours ago
Last time I used Gemini I watched it burn tokens at three times the rate of any other model, arguing with itself, and it rarely produced a result. This was around Christmas or shortly after. Has that BS stopped?
DefineOutside 6 hours ago | parent
It's still not uncommon for it to accidentally escape its thinking block and be unable to end its response, or to call the same tool repeatedly. I've watched it burn 50 million tokens in a loop before killing the chat.
kaycey2022 6 hours ago | parent
No. It's still shit. It can do some well-contained tasks, but it is far less usable on production codebases than GPT or Claude models, mainly because of the usage limits and the lack of good environments to use it in. Anthropic gets away with this because Claude Code, as bad as it is, is still quite functional. Gemini CLI and Antigravity are utter trash in comparison.