user- 8 hours ago
I'm a believer that everyone's main flow should be model/provider-agnostic at a high level. I often run out of Claude tokens and use GLM-5 as a backup. https://gist.github.com/ManveerBhullar/7ed5c01a0850d59188632... is a simple script I use to toggle which backend my Claude Code is using.
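For anyone curious what such a toggle looks like: this is a minimal sketch, not the linked gist. It relies on the fact that Claude Code honors the ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN environment variables; the GLM endpoint URL and the GLM_API_KEY variable name are assumptions.

```shell
#!/bin/sh
# Hypothetical backend toggle for Claude Code (sketch, not the gist above).
# Source this file, then call `use_backend glm` or `use_backend claude`.
use_backend() {
  case "$1" in
    glm)
      # Point Claude Code at an Anthropic-compatible GLM endpoint (assumed URL).
      export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
      export ANTHROPIC_AUTH_TOKEN="${GLM_API_KEY:-}"
      ;;
    claude|*)
      # Default: unset the overrides so Claude Code talks to Anthropic directly.
      unset ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN
      ;;
  esac
  echo "claude-code backend: ${1:-claude}"
}
```

Because environment variables only propagate downward, the script has to be sourced into the current shell (`. ./toggle.sh`) rather than executed as a subprocess.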
bob1029 8 hours ago | parent | next
I tried the agnostic thing for a while, but there are enough quirks between providers that I gave up trying to normalize them. GPT5.x wipes the floor with other models for my specific tool-calling scenarios, and I'm not going to waste time bridging arbitrary, evolving gaps between providers. I put my Amex details into OAI, I get tokens, it just works.

I really don't understand what the hell is going on with Claude. The $200/month plan is so confusing to me; I'd rather just buy however many tokens I plan to use. $200 worth of OAI tokens would go a really long way for me (much longer than a month), but perhaps I'm holding it wrong.
fastball 8 hours ago | parent | prev | next
Model agnosticism and provider agnosticism are orthogonal concerns: e.g. you can run Claude models on AWS Bedrock, which gives you provider choice for the same model. Whether you still need model agnosticism at that point is a very different question.
cyanydeez 8 hours ago | parent | prev | next
Interesting; do you find they actually react the same way to the harness?