infamia 2 hours ago:

> I think the fact that they reached 2B/mo in revenue by dogfooding cc is all the proof that one needs that this thing actually works.

That's a notable achievement, but let's have some balance... It's also responsible for the biggest self-own in software industry history: leaking 1) their crown jewels (i.e., source code), 2) the existence of their next model, Mythos, and 3) their roadmap, all in a highly competitive market.
NitpickLawyer 2 hours ago:

Eh... I personally think that having the keypads that control entry to a DC depend on DNS served by that same DC is a bit more of a self-own than leaking the source code of an app, but I get your point. It's obviously not perfect, but it's also obviously working.

Let's put this in perspective. Imagine it's three years ago, April 2023. ChatGPT launched four months earlier. We've all been using it, writing poems in parrot talk or whatever. Someone tells you: "In two years there will be an app that lets you use LLMs to write code. It will be coded by humans for three weeks, then by humans + LLMs for six months, and then by LLMs mostly unsupervised. One year after that, they'll be making 2B/mo off that app." Would you believe them? Not even the most maximalist, overhyping, AI-singularity-frenzied people would have said that. And yet... it happened.
|
|
claw-el 2 hours ago:

Is the reason they reached 2B/mo partly that their users feel they get unlimited use of it? If ‘feeling like it is unlimited use’ is a huge part of what creates the 2B/mo, this change of limits might jeopardize it. That being said, Anthropic may be diverting capacity to train the next model, and if it is significantly better, people will start flocking back.
AstroBen an hour ago:

Not really. A person will eventually drink dirty water if it's the only thing available in a desert. There's very little competition among SOTA models. The models themselves also weren't built by Claude; the current revenue has almost nothing to do with what Claude built. Hell, if it were so far ahead, they wouldn't be desperately trying to block OpenCode.
NitpickLawyer an hour ago:

> The models themselves also weren't built by Claude. The current revenue has almost nothing to do with what Claude built.

Ummm, no. Anthropic is #1 in coding because they developed it first. They then used data + signals to train models specifically to work best with cc. The two work together. Why do you think every provider (including the Chinese ones) has its own harness? Having real-world data and usage metrics helps train the models in immense ways. Shipping features fast in this case >>> shipping perfect features. Some of them they dropped along the way, but having them in the cc + models pair is what matters. People switched from Cursor to cc in droves because it worked better there. That's not a fluke. That's how you improve your models: by collecting real-world data after you launch them.

> Hell if it was so far ahead then they wouldn't be desperately trying to block OpenCode.

That's a lack-of-compute problem.
|
|
MagicMoonlight 2 hours ago:

Everything works until it doesn't. The problem with slop is that nobody understands it. Nobody ever designed it; nobody really knows how it works. You're just putting blind faith in the slop you've shipped. It lets you move very fast, but if you've accidentally compromised all your data or bank accounts through the slop, you won't know until you're destroyed.