rafaelmn 10 hours ago

I'm still paying for the $10 GH Copilot plan, but I don't use it because:

  - context is aggressively trimmed compared to CC, obviously for cost-saving reasons, so the performance is worse
  - the request pricing model forces me to adjust how I work
These two alone make it not worth the $60/month savings for me.

I like the VSCode integration, and the MCP/LSP usage sometimes surprised me compared to the dumb grep from CC. Ironically, VSCode is becoming my terminal emulator of choice for all the CLI agents: SSH/container access, automatic port mapping, etc. make it more convenient than tmux sessions for me. So Copilot would be ideal for me, but it's tuned to be a budget, broad-scope tool rather than a tool for professionals who would pay to get work done.

lbreakjai 10 hours ago | parent | next [-]

You can use your GH subscription with a different harness. I'm using opencode with it, which turns GH into a pure token provider; the orchestration (compacting, etc.) is left to the harness.

That makes it very good value for money, as far as I'm concerned.
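Roughly, the setup looks like this (a sketch, not a recipe: the exact auth flow, subcommands, and model IDs are assumptions that depend on your opencode version and on what your Copilot plan actually exposes):

```shell
# Log in and pick GitHub Copilot as the provider (device-code flow in the browser)
opencode auth login

# List the models your Copilot subscription actually exposes
opencode models

# Pin one for the project; the model ID below is illustrative only
cat > opencode.json <<'EOF'
{
  "model": "github-copilot/claude-sonnet-4"
}
EOF
```

After that, opencode drives the agent loop itself and only uses Copilot for the token stream.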

rafaelmn 9 hours ago | parent | next [-]

But you still get charged per turn, right? I don't like that because it impacts my workflow. When I was last using it, I would easily burn through the $10 plan in two days just by iterating on plans interactively.

lbreakjai 9 hours ago | parent [-]

Honestly, I'm not sure. I'm on my company's plan; I get a progress bar vaguely filling up, but no idea of the costs or billing under the hood.

sourcecodeplz 6 hours ago | parent | prev [-]

But you still get the reduced context window.

briHass 10 hours ago | parent | prev [-]

Disagree entirely.

GHCP at least is transparent about the pricing: hitting enter on a prompt = one request. CC/Codex use some opaque quota scheme, where you never really know if a request will be 1%, 2%, or 10% of your hourly max, let alone your weekly max.

I've never seen much difference from the context ostensibly being shorter in GHCP. All of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that; CC/Codex do it frequently.

rafaelmn 10 hours ago | parent [-]

>I've never seen much difference with context ostensibly being shorter in GHCP, all of the models (in any provider) lose the thread well before their window is full, and it seems that aggressive autocompaction is a pretty standard way to help with that, and CC/Codex do it frequently.

Then we've had wildly different results. Running CC and GH Copilot with Opus 4.6 on the same task, the results out of CC were just better; likewise for Codex and GPT 5.4. I have to assume it's the aggressive context compaction/limited context loading, because watching what Copilot does, it seems to read way less context and then misses stuff other agents pick up automatically.