| ▲ | jdefr89 9 hours ago |
| Over-reliance on LLMs is going to become such a disaster in a way no one would have thought possible. Not sure exactly what, who, when, or where… just that having your entire product or repo dependent on a single entity is going to lead to some bad times… |
|
| ▲ | xnx 9 hours ago | parent | next [-] |
| > on a single entity Contrary to the popular opinion here, there are other services beyond Claude Code. These usage limits might even prompt (har har) people to notice that Gemini is cheaper and often better. |
| |
| ▲ | bigbinary 9 hours ago | parent | next [-] | | On-premise LLMs are also getting better and likely won’t stop; as costs rise alongside the technical improvements, I would imagine cost-saving methods will improve as well | | |
| ▲ | horsawlarway 9 hours ago | parent [-] | | I still think it's basically unavoidable that most people who might pay for API access will end up on-prem. Fixed costs, exact model pinning, outage resistance, enshittification resistance, better security, better privacy, etc... There are just so many compelling reasons to be on-prem instead of dependent on a 3rd party hoovering up all your data and prompts and selling you overpriced tokens (which they eventually MUST be, because these companies have to make a profit at some point). If the only counterbalance is "well, the API is cheaper than buying my own hardware"... that's a short-term problem. Hardware costs are going to drop over time, and capabilities are going to continue improving. It's already pretty insane how good a model I can run locally on two old RTX 3090s. Is it as good as modern Claude? No. Is it as good as Claude was 18 months ago? Yes. Give it a decade for companies to really push into the "diminishing returns" of scaling and new models, combined with new hardware built with these workloads in mind, and I think on-prem is the pretty clear winner. | | |
| ▲ | bigbinary 8 hours ago | parent [-] | | These big players don’t have as big of a moat as they like to advertise, but as long as VC wants to subsidize my agents, I’ll keep paying for the $20 plan until they inevitably cut it off |
|
| |
| ▲ | kakugawa 8 hours ago | parent | prev | next [-] | | gemini-cli has not been usable for weeks. The API endpoint it uses for subscription users is so heavily rate-limited that the CLI is non-functional. There are many reports of this issue on GitHub. [1] [1] https://github.com/google-gemini/gemini-cli/issues?q=is%3Ais... | | |
| ▲ | tasuki 7 hours ago | parent [-] | | I use Gemini-CLI at work, and haven't noticed anything. I use Google Jules (free tier) on a toy project much more heavily and can't complain. I think sometimes the prompts take longer than they used to, but I couldn't care less. I'm not in a hurry. |
| |
| ▲ | solarkraft 7 hours ago | parent | prev | next [-] | | Gemini better? What are y’all doing that it doesn’t crash and burn within the first minute of using it? It might be acceptable for some general tasks, but I haven’t EVER seen it perform well on non-trivial programming tasks. | |
| ▲ | earlyriser 9 hours ago | parent | prev | next [-] | | Gemini is not better on the quotas: https://discuss.ai.google.dev/t/quota-limit-for-pro-plan/130... | |
| ▲ | ikidd 8 hours ago | parent | prev [-] | | Last time I used Gemini, I watched it burn tokens at three times the rate of any other model arguing with itself, and it rarely produced a result. This was around Christmas or shortly after. Has that BS stopped? | | |
| ▲ | DefineOutside 8 hours ago | parent | next [-] | | It's still not uncommon for it to accidentally escape its thinking block and be unable to end its response, or to call the same tool repeatedly. I've watched it burn 50 million tokens in a loop before killing the chat. | |
| ▲ | kaycey2022 8 hours ago | parent | prev [-] | | No. It's still shit. It can do some well-contained tasks, but it is far less usable on production codebases than GPT or Claude models, mainly because of the usage limits and the lack of good environments to use it in. Anthropic gets away with this because Claude Code, as bad as it is, is still quite functional. Gemini CLI and Antigravity are utter trash in comparison. |
|
|
|
| ▲ | jorvi 9 hours ago | parent | prev | next [-] |
| For a second I hoped you were gonna comment on how LLMs are going to rot our skill sets and our brains. Like the people already complaining that they "have to think" when ChatGPT or Claude or Grok is down. Oh well. |
| |
| ▲ | Retr0id 9 hours ago | parent | next [-] | | The other day I was doing some programming without an LSP, and I felt lost without it. I was very familiar with the APIs I was using, but I couldn't remember the method names off the top of my head, so I had to reference docs extensively. I am reliant on LSP-powered tab completions to be productive, and my "memorizing API methods" skill has atrophied. But I'm not worried about this having some kind of impact on my brain health, because not having to memorize API methods leaves more room for other things. It's possible some people offload too much to LLMs, but personally, my brain is still doing a lot of work even when I'm "vibecoding". | | |
| ▲ | akdev1l 9 hours ago | parent [-] | | Ironically this is one of my main use cases for LLMs “Can you give me an example of how to read a video file using the Win32 API like it’s 2004?” - me trying to diagnose a windows game crashing under wine | | |
| ▲ | seanw444 4 hours ago | parent [-] | | Exactly. I feel this is the strongest use case. I can get personalized digests of documentation for exactly what I'm building. On the other hand, there's people that generate tokens to feed into a token generator that generates tokens which feeds its tokens to two other token generators which both use the tokens to generate two different categories of tokens for different tasks so that their tokens can be used by a "manager" token generator which generates tokens to... And so on. It's all so absurd. |
|
| |
| ▲ | ahsillyme 9 hours ago | parent | prev | next [-] | | I read that as implied. | |
| ▲ | toss1 9 hours ago | parent | prev | next [-] | | Unsurprising people complain. "Thinking is the hardest work there is, which is why so few people do it" — attrib Henry Ford Now we have tools that can appear to automate your thinking for you. (They don't really think, but they do appear to, so...) | | |
| ▲ | jakobloekke 9 hours ago | parent [-] | | “Thinking is to humans as swimming is to cats. They can do it, but they prefer not to.” — Kahneman |
| |
| ▲ | bitwize 9 hours ago | parent | prev [-] | | AI will totally rot our brains, just like television, video games, and the internet all did before. | | |
|
|
| ▲ | dewey 9 hours ago | parent | prev | next [-] |
| There are so many different models, from hosted to local, and there's almost no switching cost, since most of them are API-compatible or supported by one of the gateways (Bifrost, LiteLLM, ...). There are many things to worry about, but which LLM provider you choose doesn't really lock you in right now. |
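As a minimal sketch of what "API compatible" means in practice (the base URLs and model names below are made up for illustration): providers and gateways that expose the OpenAI-style chat-completions format all accept the same request payload, so switching comes down to changing the base URL and model name.

```python
import json

# Hypothetical endpoints and model names -- illustrative only.
PROVIDERS = {
    "hosted-a": ("https://gateway-a.example/v1", "model-a"),
    "hosted-b": ("https://gateway-b.example/v1", "model-b"),
    "local":    ("http://localhost:8000/v1",     "local-model"),
}

def build_chat_request(provider: str, prompt: str) -> tuple[str, bytes]:
    """Return (url, body) for an OpenAI-style /chat/completions call.

    Only the base URL and model name vary per provider; the payload
    shape is identical, which is what keeps switching costs low.
    """
    base_url, model = PROVIDERS[provider]
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{base_url}/chat/completions", json.dumps(payload).encode()

url, body = build_chat_request("local", "hello")
print(url)  # http://localhost:8000/v1/chat/completions
```

Gateways like LiteLLM build on exactly this: they translate the common payload shape to each backend, so application code never changes.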
|
| ▲ | wutwutwat 9 hours ago | parent | prev | next [-] |
| So, like, GitHub then? |
| |
|
| ▲ | adolph 9 hours ago | parent | prev | next [-] |
| I don't get this POV, maybe b/c I'm not a heavy Claude Code user, just a dabbler. Any LLM tool that can selectively use part of a code base as part of the input prompt will be useful as an augmentation tool. Note the word "any." Like cloud services, each tool will have unique aspects, but just like cloud services there is a shared basic value proposition that allows for migration from one to another and competition among them. If Gemini or OpenAI or Ollama running locally becomes a better choice, I'll switch without a care. Subscription sprawl is likely the more pressing issue (just remembered I should stop my GH Copilot subscription since switching to Claude). |
|
| ▲ | classified 5 hours ago | parent | prev | next [-] |
| It should be abundantly clear that depending on a single entity will screw you royally, but obviously we don't learn from the mistakes of others. We are condemned to repeat history because we don't know it. |
|
| ▲ | dude250711 9 hours ago | parent | prev | next [-] |
| How can automatic slop-prevention be a disaster? It's a feature. |
|
| ▲ | nickphx 8 hours ago | parent | prev [-] |
| if you rely on the black box of bullshit... you deserve your own fate. |