baxtr 6 hours ago
Seems like models are becoming commoditized?
|
  verdverm 6 hours ago
  Same for OpenClaw; it will be a commodity soon, if you don't think it is already.

    elxr 5 hours ago
    It's definitely not right now. What else has a feature list and docs even resembling it?

      Aurornis 3 hours ago
      OpenClaw has only been in the news for a few weeks. Why would you assume it's going to be the only game in town? Early adopters are some of the least sticky users. As soon as something new arrives with claims of better features, better security, or better architecture, the next new thing will become the popular topic.

      verdverm 3 hours ago
      OpenClaw has mediocre docs, judging by my experience with hundreds of open source projects over many years. I think Anthropic's docs are better. Best to keep sampling from the buffet rather than picking a main course yet, imo.

      There's also a ton of real experience conveyed on social media that never makes it into docs. I've gotten as much value and insight from those posts as from any documentation site.

    baxtr 6 hours ago
    Not sure. I mean, the tech, yes, definitely. But the community, no.

      verdverm 6 hours ago
      The community is tiny by any measure (beyond the niche); market penetration is still very, very early.

      Anthropic's community, I assume, is much bigger. How hard would it be for them to offer something close enough for their users?

        filoleg 4 hours ago
        > Anthropic's community, I assume, is much bigger. How hard would it be for them to offer something close enough for their users?

        Not gonna lie, that's exactly the scenario I am personally excited for. Not out of any particular love for Anthropic, but because I expect this kind of tight competition to be very good for trying a lot of fresh new things, and for the subsequent discovery of new ideas and what works.

          verdverm 3 hours ago
          My main gripe is that it feels more like land grabbing than discovery. Stories like this reinforce my bias.
|
  lez 6 hours ago
  It has already been so with ppq.ai (pay per query dot AI).
|
  cyanydeez 6 hours ago
  Things that aren't happening any time soon, but need to for actual product success built on top:

  1. Stable models
  2. Stable pre- and post-context management

  As long as they keep mothballing old models and shipping indeterminate changes, whatever you try to build on them today will be rugpulled tomorrow. This is all before enshittification can even happen.

    altcunn 3 hours ago
    This is the underrated risk that nobody talks about enough. We've already seen it play out with the Codex deprecation, the GPT-4 behavior drift saga, and every time Anthropic bumps a model version.

    The practical workaround most teams land on is treating the model as a swappable component behind a thick abstraction layer: pin to a specific model version, run evals on every new release, and only upgrade when your test suite passes. But that's expensive engineering overhead that shouldn't be necessary.

    What's missing is something like semantic versioning for model behavior. If a provider could guarantee "this model will produce outputs within X similarity threshold of the previous version for your use case," you could actually build with confidence. Instead we get "we improved the model," and your carefully tuned prompts break in ways you discover from user complaints three days later.
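    A minimal sketch of that pin-and-gate pattern, for the curious. The model ID, call_model, and similarity helper below are hypothetical placeholders, not any provider's real API:

      # Pin an exact model snapshot behind a wrapper and only upgrade
      # when an eval suite clears a similarity threshold. Everything
      # provider-specific is injected as a callable, so this is a sketch
      # of the pattern, not a real client.
      from dataclasses import dataclass
      from typing import Callable

      PINNED_MODEL = "provider-model-2025-01-15"  # hypothetical dated snapshot, never "latest"

      @dataclass
      class EvalCase:
          prompt: str
          reference: str  # known-good output captured under the pinned model

      def safe_to_upgrade(
          call_model: Callable[[str, str], str],    # (model_id, prompt) -> output
          similarity: Callable[[str, str], float],  # e.g. embedding cosine or an LLM judge
          candidate_model: str,
          cases: list[EvalCase],
          threshold: float = 0.9,
      ) -> bool:
          """Approve the candidate only if every eval case clears the threshold."""
          return all(
              similarity(call_model(candidate_model, c.prompt), c.reference) >= threshold
              for c in cases
          )

    Injecting call_model and similarity as callables is what keeps the model a swappable component; the threshold is a crude stand-in for the behavioral semver guarantee described above.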
|