rohansood15 · 7 hours ago:

How do you define your productivity? Are you astronomically richer and/or freer now that you're so much more productive?

mlsu · 6 hours ago:

Why, lines of code, of course! As to how those lines of code translate to customer value, well, I'm not quite sure what the code does. And in any case, I've been talking more to my fleet of agents than to customers these days. I'm sure the value will fall right out of this tree if I just shake harder, eh?

wiseowise · 5 hours ago:

Infinite monkeys with typewriters: you're onto something. Keep grinding (and paying for Claude, better yet multiple $200 subscriptions), king. I'm sure success is right around the corner; surely the casino loses this time.

falcor84 · 7 hours ago:

No, not yet astronomically richer. I'm working on it, but part of the reason I haven't yet broken all my bones from repeatedly diving into a pool of money is the Red Queen's race: with how much easier it is to write code and realize your vision, coupled with how jaded we've all become, the bar is just much higher. But I'm pretty certain that if I had this sort of capability even just 3 years ago, and others didn't, I would have been like a Kryptonian under a yellow sun.

applfanboysbgon · 6 hours ago:

The bar is on the floor. Not that I can prove it objectively, but it is my strong belief that software quality has gotten worse since LLMs started being mandated in enterprises; e.g., Windows has begun shipping critical issues in updates more often. The vibe motherships themselves certainly don't inspire confidence: ChatGPT for Desktop (which is simply the chat interface in an Electron window) doesn't have tabs, and yet within an hour of chatting it was consuming 2.5 GB of memory. In a single tab, remember, because providing tabs is an impossible feat that no human or robot could possibly think to provide. Who would want to ask questions about two different subjects, anyway?

wiseowise · 5 hours ago:

> ChatGPT for Desktop (which is simply the chat interface in an Electron window) doesn't have tabs, and yet within an hour of chatting it was consuming 2.5 GB of memory.

Don't worry, they maintain feature parity between desktop and web. It routinely consumes 2 GB in my browser for some reason.

diatone · 5 hours ago:

So if the benefits haven't accrued to you, they must have gone to your customers, right?

wiseowise · 5 hours ago:

> 3 years ago, and others didn't, I would have been like a Kryptonian under a yellow sun

And what exactly would've changed three years ago compared to now?

unshavedyak · 7 hours ago:

$2k/mo [1] is not something I could stomach for the quality I get from Claude Code, personally. I'm curious what your base number is for your 10x figure.

[1]: 10x my $200/mo bill

sillysaurusx · 6 hours ago:

Do you come anywhere close to the limits for Claude at $200? I spent $100 for one month and only managed to almost fill the context window once (Opus), and I was doing a lot of coding. I guess it's a price tier for agent farming? A bunch of agents in parallel?

wiseowise · 5 hours ago:

> If there wasn't this fierce competition, and I had to pay 10 times as much, I still gladly would.

Just pay the excess to me and let's pretend it costs 10x more, then.

yfw · 7 hours ago:

Great, so how many of you are there to keep these cash incinerators afloat?

mystifyingpoi · 5 hours ago:

> and I had to pay 10 times as much, I still gladly would

That narration will make it become reality at some point. Stop it, please.

applfanboysbgon · 7 hours ago:

Setting aside my personal grievances with their vibe-coded slop products surrounding the model, the problem for Anthropic is that they do need to charge 10 times as much for model access, but can't, because DeepSeek exists and can actually be served sustainably at $20/mo. LLMs are certainly here to stay, for better or worse, but the people going hundreds of billions of dollars into debt, perhaps not so much. (Unless the US government decides it's worth propping them up for access to a billion people's conversations and the ability to influence them, which I do believe is a plausible outcome, but that would not necessarily make for a riveting tale of capitalist competition.)

tomnipotent · 6 hours ago:

> can actually be sustainably served at $20/mo

Except it comes with a terrible experience that's not sustainable for any serious day-to-day work that doesn't involve constant coffee breaks while you wait for tokens to be generated. No thanks. They don't have to live up to the hype to be useful tools, and for something that costs me annually what I make in a day, I'm perfectly happy with the value I'm getting out of it all (even if someone else is subsidizing it... for now).

> going hundreds of billions of dollars into debt

This forum exists exactly because of these companies.

applfanboysbgon · 6 hours ago:

> Except it comes with a terrible experience that's not sustainable for any serious day-to-day work

I think you may have misinterpreted what I was saying as a reference to local models? I am not talking about local; you cannot run DeepSeek on consumer hardware, despite a bunch of people conflating "some 30B model trained on DeepSeek outputs" with DeepSeek. But businesses can purchase fleets of GPUs capable of serving DeepSeek for an investment measured in millions rather than billions, and offer customers something 85% as good as Claude while actually profiting on inference from a $20 subscription, without the massive overhead of training frontier models from scratch.

> (even if someone else is subsidizing it... for now)

That they are giving away something they cannot sustain is the entire point of my comment.

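The economics claim above is back-of-envelope arithmetic, and can be sketched as such. Every number below (fleet cost, depreciation window, operating cost, subscriber count) is an illustrative assumption for the sake of the sketch, not real data:

```python
# Back-of-envelope sketch of the inference-economics argument.
# All inputs are illustrative assumptions, not measured figures.

gpu_fleet_cost = 5_000_000         # assumed one-time hardware investment ($, "millions not billions")
gpu_lifetime_months = 36           # assumed depreciation window
power_and_ops_per_month = 100_000  # assumed electricity, hosting, and staff ($/month)
subscribers = 50_000               # assumed paying users
price_per_month = 20               # the $20 subscription from the comment

# Amortize the hardware over its lifetime and add running costs.
monthly_cost = gpu_fleet_cost / gpu_lifetime_months + power_and_ops_per_month
monthly_revenue = subscribers * price_per_month
margin = monthly_revenue - monthly_cost

print(f"monthly cost:    ${monthly_cost:,.0f}")
print(f"monthly revenue: ${monthly_revenue:,.0f}")
print(f"monthly margin:  ${margin:,.0f}")
```

Whether the margin actually comes out positive hinges entirely on those assumed utilization and subscriber figures; the sketch only illustrates why serving an existing open-weights model is a millions-scale bet rather than a billions-scale one.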
wiseowise · 5 hours ago:

> This forum exists exactly because of these companies.

What's that even supposed to mean?