forgotpwd16 | 11 hours ago
> I'll upload the session data probably tomorrow so you could see exactly what was done.

That'll be dope. The tokens used (input, output, total) are actually saved within Codex's JSONL files.
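A rough way to total them up, assuming the session logs live under ~/.codex/sessions and each record carries a usage object; the key names here are guesses, so inspect your own files first:

    import glob
    import json
    import os

    # Sum token usage across all Codex session logs.
    totals = {"input": 0, "output": 0, "total": 0}
    paths = glob.glob(os.path.expanduser("~/.codex/sessions/**/*.jsonl"), recursive=True)
    for path in paths:
        with open(path) as f:
            for line in f:
                try:
                    rec = json.loads(line)
                except json.JSONDecodeError:
                    continue  # skip truncated or malformed lines
                usage = rec.get("usage")  # assumed key; check your schema
                if isinstance(usage, dict):
                    totals["input"] += usage.get("input_tokens", 0)
                    totals["output"] += usage.get("output_tokens", 0)
                    totals["total"] += usage.get("total_tokens", 0)

    print(totals)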
|
storystarling | 10 hours ago
| That 19 EUR figure is basically subscription arbitrage. If you ran that volume through the API with xhigh reasoning the cost would be significantly higher. It doesn't seem scalable for non-interactive agents unless you can stay on the flat-rate consumer plan. |
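Rough arithmetic makes the gap obvious. Every number below is a made-up placeholder; plug in the real token counts from the logs and the provider's current list prices:

    # Back-of-the-envelope: metered API cost vs. a flat-rate plan.
    input_tokens = 50_000_000   # hypothetical monthly agent traffic
    output_tokens = 5_000_000   # hypothetical

    usd_per_m_input = 1.25      # assumed price per 1M input tokens
    usd_per_m_output = 10.00    # assumed price per 1M output tokens

    api_cost = (input_tokens / 1e6) * usd_per_m_input \
             + (output_tokens / 1e6) * usd_per_m_output
    print(f"metered API: ${api_cost:,.2f}  vs  flat plan: ~19 EUR")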
embedding-shape | 9 hours ago
Yeah, no way I'd do this if I paid per token. Next experiment will probably be local-only with GPT-OSS-120b, which, according to my own benchmarks, still seems to be the strongest local model I can run myself. It'll be even cheaper then (as long as we don't count the money it took to acquire the hardware).
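The local-only setup could reuse any OpenAI-compatible client pointed at a local inference server (e.g. llama.cpp's llama-server or vLLM hosting the model). A minimal sketch; the port, API key, and model name are placeholders, not a tested config:

    from openai import OpenAI

    # Point the standard client at a local OpenAI-compatible endpoint.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused-locally")

    resp = client.chat.completions.create(
        model="gpt-oss-120b",  # whatever name the local server registers
        messages=[{"role": "user", "content": "Summarize the tradeoffs of local inference."}],
    )
    print(resp.choices[0].message.content)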
mercutio2 | 6 hours ago
What toolchain are you going to use with the local model? I agree that's a strong model, but it's so slow for me with large contexts that I've stopped using it for coding.
|
soiltype | 11 hours ago
| Thank you in advance for that! I barely use AI to generate code so I feel pretty lost looking at projects like this. |