gWPVhyxPHqvk 20 hours ago

It's massively cheaper. Copilot charges per request, which with some clever prompting, can lead to huge amounts of work being done at fractions of the cost of Claude Code. Millions of tokens for mere pennies. MS must be taking a huge hit somewhere, because I'm probably getting 10-20x my value out of GH relative to CC.

I am not locked in to Anthropic, either. I can easily switch between GPT and Gemini models based on how I think each would perform in various scenarios. That's a big win. I do a lot of design with Opus and implement with GPT 5.4.

Also, Github Copilot CLI is pretty much at feature parity (for the stuff that matters) with Claude Code. Using both at work and home, I don't think there's much difference in features between the two. Maybe I'm not a super power user, and just a regular dumb user, but GH doesn't seem buggy and everything I think I'd want to do with CC I can do with GH.

amy1173 17 hours ago | parent | next [-]

I'm spending a literal fortune on CC. We also have GH Copilot, but the devs imply that CC is better. Will GitHub Copilot let us access the skills and agent frameworks we use in CC?

mynameisvlad 14 hours ago | parent | next [-]

Devs say a lot of uninformed things, often with a heavy predisposition to hating the "legacy" monoliths that are Microsoft and, by association, GitHub.

Yes, Copilot supports skills. Practically all agents support very similar feature sets or are actively building up parity support if not already there. The only real difference between systems is the prompt and payment method. Copilot even allows you to use Anthropic's own skills repository: https://github.com/anthropics/skills

https://docs.github.com/en/copilot/concepts/agents/about-age... details the support for skills. https://docs.github.com/en/copilot/concepts/agents/copilot-c... details the CLI tool in general, which seems more or less on par with Claude Code's.

walthamstow 12 hours ago | parent | next [-]

It's a bit rich to go around calling people uninformed because they prefer one harness to another, particularly when you are recommending GHC as comparable to CC.

nfg 12 hours ago | parent [-]

Have you used the gh copilot cli? What would stand out most to you as gaps right now?

walthamstow 12 hours ago | parent [-]

IME it is less capable of performing complex work, more frequently goes down blind alleys and needs correcting, that kind of thing. It's night and day vs CC.

ValentineC 4 hours ago | parent | next [-]

That's probably because the 200k context window means that it'll end up compacting things sooner.

I've just had a chat with Copilot's Opus 4.6 go off the rails after compaction today.

nfg 11 hours ago | parent | prev [-]

And this has been comparing like for like with CC - say Opus 4.6 on the same reasoning effort? Hasn’t been my experience particularly but fair enough. I do tend to use them in different situations (CC outside of work).

walthamstow 11 hours ago | parent [-]

Even if it is close (maybe GHC CLI has improved in the month since I last used it), and I know you didn't say it, calling people uninformed because they prefer one or the other is just wrong.

nfg 11 hours ago | parent [-]

I’d agree, though maybe there’s a more charitable reading of the OP. “Uninformed” is one of those accusations that it’s rarely very polite or fair to level against an individual, but it is sometimes reasonable against a group based on observation. My experience would be that it’s true that “devs say lots of uninformed things” - and I’d include myself in that. It’s been my experience that it’s particularly tough in this space at this time because:

1. Tooling is changing very fast but people tend to form sticky opinions (reasonably enough - there’s only so much time in the world).

2. It’s just hard to form robust objective opinions - you have to make a real effort to build test cases and evaluation processes and generally the barrier to entry there is pretty high.

So - I agree, calling people uninformed is not a great way to win them over, but maybe that’s the price of living in a world of anecdotes which become fixed in people’s minds.

ValentineC 4 hours ago | parent | prev | next [-]

Claude (and most other models) in GitHub Copilot still only have 200k context, with a hefty amount being reserved for some reason. It's 1M at many other providers.

maille 19 hours ago | parent | prev [-]

How can I learn that clever prompting?

esafak 19 hours ago | parent [-]

Try to pack as much clear work into your prompt as you can so you don't go back and forth.

chatmasta 18 hours ago | parent [-]

Do hacks like “read prompt.md, and follow its instructions. When you’re done, read it again and follow its instructions.” And then you have some background process appending to the file to keep it warm and you just keep writing there?
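A rough sketch of that pattern, assuming nothing Copilot-specific (the file name and helpers here are illustrative): one step seeds prompt.md with the self-re-reading instruction, and a separate process appends tasks so the single billed request keeps finding new work.

```python
# Hypothetical sketch of the "keep prompt.md warm" hack described above.
# All names here are illustrative, not part of any Copilot API.
from pathlib import Path

PROMPT = Path("prompt.md")

def seed_prompt() -> None:
    # The instruction the agent is pointed at: do the work, then re-read.
    PROMPT.write_text(
        "Read this file and follow its instructions. "
        "When you are done, read it again and follow its instructions.\n"
    )

def queue_task(task: str) -> None:
    # A background process appends tasks; the agent picks them up
    # on its next re-read, all within the same billed request.
    with PROMPT.open("a") as f:
        f.write(f"- {task}\n")

seed_prompt()
queue_task("Fix the failing unit tests")
queue_task("Refactor the auth module")
```

Whether this actually stays within one billed request depends on the harness's per-request limits, as discussed below.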

cylemons 8 hours ago | parent | next [-]

There is a limit on how much Copilot can do in one request. It's pretty generous, but after some time VS Code will say "this request is taking very long, do you want to continue", and that would count as a separate request.

ValentineC 4 hours ago | parent [-]

> but after some time VS Code will say "this request is taking very long, do you want to continue", and that would count as a separate request

I don't think that's true. In VS Code, that's also configurable via the chat.agent.maxRequests setting.
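For reference, the setting lives in VS Code's settings.json; the value shown here is an arbitrary example, not the default:

```json
{
  // Raise the cap on autonomous agent requests before VS Code
  // asks whether to continue (settings.json accepts comments).
  "chat.agent.maxRequests": 100
}
```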

There was absurd latency in Copilot's Opus 4.6 model on the 1st and 2nd of April, though, which led to lots of my requests timing out with nothing to show.

esafak 16 hours ago | parent | prev [-]

You could do that. I was just trying to say that if you make your original prompt complete enough, and you have well-defined success criteria, you can tell it to keep going until they are met.

Cerium 15 hours ago | parent [-]

Agreed - my experience mirrors this.

> "Fix the following compile errors" -> one-shot attempt, then it stops.

> "Fix the following compile errors. When done, test your work and continue iterating until build passes without error" -> same cost but it gets the job done.