| ▲ | tailscaler2026 4 hours ago |
| Subsidies don't last forever. |
|
| ▲ | pitched 4 hours ago | parent | next [-] |
| Running an open model like Kimi constantly for an entire month costs around $100-200, roughly equal to a pro-tier subscription. This is not my estimate, so I'm more than open to hearing refutations. Kimi isn't at all Opus-level intelligent, but the models are roughly evenly sized from the guesses I've seen. So I don't think it's the infra being subsidized so much as the training. |
| |
| ▲ | nothinkjustai 4 hours ago | parent | next [-] | | Kimi costs $0.30/$1.72 (input/output) per million tokens on OpenRouter; $200 at those rates gives you far more usage than you would get out of a $200 Claude subscription. There are also various subscription plans you can use to spend even less. | |
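Back-of-the-envelope, the quoted rates make the comparison concrete. A minimal sketch, assuming the $0.30/$1.72 figures are per million input/output tokens (OpenRouter's usual unit), with a hypothetical heavy month of 400M input and 40M output tokens:

```python
# Pay-as-you-go cost sketch at the OpenRouter rates quoted above.
# The monthly token volumes are hypothetical, chosen for illustration.
KIMI_INPUT_PER_M = 0.30   # $ per million input tokens (quoted in thread)
KIMI_OUTPUT_PER_M = 1.72  # $ per million output tokens (quoted in thread)

def monthly_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Dollars spent in a month at pay-as-you-go rates."""
    return input_tokens_m * KIMI_INPUT_PER_M + output_tokens_m * KIMI_OUTPUT_PER_M

cost = monthly_cost(input_tokens_m=400, output_tokens_m=40)
print(f"${cost:.2f}")  # 400*0.30 + 40*1.72 -> $188.80
```

At that (fairly heavy) volume the bill still comes in under a $200/month subscription, which is the point being made above.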
| ▲ | varispeed 3 hours ago | parent | prev | next [-] | | How do you get anything sensible out of Kimi? | |
| ▲ | senordevnyc 3 hours ago | parent | prev [-] | | I’m using Composer 2, the model Cursor built on top of Kimi, and it’s great. Not Opus level, but I’m finding many things don’t need Opus level. |
|
|
| ▲ | smt88 4 hours ago | parent | prev | next [-] |
| Tell that to oil and defense companies. If tech companies convince Congress that AI is an existential issue (in defense or even just productivity), then these companies will get subsidies forever. |
| |
| ▲ | andai 4 hours ago | parent [-] | | Yeah, the USA winning on AI is a national security issue. The bubble is unpoppable. And shafting your customers too hard is bad for business, so I expect only moderate shafting. (Kind of surprised at what I've been seeing lately.) | | |
| ▲ | danny_codes 3 hours ago | parent [-] | | It’s considered a national security concern by this administration. Will the next be a clown show like this one? Unclear. | | |
| ▲ | smt88 an hour ago | parent [-] | | The administration doesn't decide spending. Congress does. There's no chance we get an anti-AI majority until a major AI catastrophe turns the public against it. |
|
|
|
|
| ▲ | gadflyinyoureye 4 hours ago | parent | prev [-] |
| I've been assuming this for a while. If I have a complex feature, I use Opus 4.6 in Copilot to plan (3 units of my monthly limit), then have Grok or Gemini (0.25-0.33 of my monthly units) implement and verify the work. 80% of the time, it works every time. Leaves me plenty of usage over the month. |
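The unit math above can be sketched out. A rough illustration, assuming "units" are per-request multipliers (3x for the Opus planning call, 0.25-0.33x for the implementer models, as quoted); the 300-unit monthly budget and the 20-request feature size are hypothetical figures, not from the comment:

```python
# Budget sketch for the plan-with-Opus, implement-with-cheaper-models split.
OPUS_UNITS = 3.0        # planning request multiplier (quoted above)
CHEAP_UNITS = 0.33      # implement/verify multiplier (upper end quoted)
MONTHLY_BUDGET = 300.0  # hypothetical monthly unit allowance

def feature_cost(plan_requests: int, impl_requests: int) -> float:
    """Units consumed by one feature: a few plan calls plus many impl calls."""
    return plan_requests * OPUS_UNITS + impl_requests * CHEAP_UNITS

# One planning request plus twenty implementation/verification requests:
per_feature = round(feature_cost(1, 20), 2)   # 3 + 6.6 = 9.6 units
features_per_month = MONTHLY_BUDGET // per_feature
print(per_feature, features_per_month)
```

The asymmetry is the point: one expensive planning call amortizes over many cheap execution calls, so the bulk of the request volume runs at roughly a tenth of the Opus multiplier.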
| |
| ▲ | sgc 43 minutes ago | parent | next [-] | | I have a very newcomer-type question. What is the output format of your plan, such that you can break context and get the other LLM to produce satisfactory results? What level of detail is in the plan: bullet points, pseudo-code, or somewhere in the middle? | |
| ▲ | andai 4 hours ago | parent | prev [-] | | Yeah I've been arriving at the same thing. The other models give me way more usage but they don't seem to have enough common sense to be worth using as the main driver. If I can have Claude write up the plan, and the other models actually execute it, I'd get the best of both worlds. (Amusingly, I think Codex tolerates being invoked by Claude (de facto tolerated ToS violation), but not the other way around.) | | |
| ▲ | zozbot234 2 hours ago | parent [-] | | I don't think there's any ToS violation involved? AIUI you can use GPT models with any harness, at least at present. You could nonetheless have Codex write up the plan to an .md file for Claude (perhaps Sonnet or even Haiku?) to execute. |
|
|