| ▲ | rahulyc 6 hours ago |
| All the websites currently blocking Claude Code or other AI agents are fighting a losing battle. Computer use is in its early stages, and the thing preventing mass adoption seems to be the number of tokens it takes. A CLI agent can fumble through ten commands that don't work before finding the right one, and we barely notice. Visual agents (browser use, computer use, etc.) eventually fumble onto the right thing too, but we don't have the patience to wait 20 minutes for a button click. As tokens get cheaper and faster, we'll probably get models that can use a UI just as natively as a CLI. |
|
| ▲ | boringg 6 hours ago | parent | next [-] |
| Tokens cheaper? I don't think that's the case ... VC-funded tokens were there to build a user base, and token prices will go up as providers eventually switch from growth to profitability. |
| |
| ▲ | Aurornis 5 hours ago | parent | next [-] | | I wish I could place a lot of money on the opposite side of this bet. I don't think many realize how good the cheap alternative models are becoming. I prefer SOTA models for key work, but I can also spend 10x as many tokens on an open model hosted by a non-VC-subsidized provider (one selling at a profit) for tasks that can tolerate slightly less quality. The situation is only getting better as models improve and data centers get built out. | | |
| ▲ | caughtinthought 5 hours ago | parent | next [-] | | What open source model and what non-subsidized provider specifically? | | |
| ▲ | nijave an hour ago | parent [-] | | GLM 4.7 Flash is $0.07/1M tokens in, $0.40/1M tokens out on AWS Bedrock us-east-1. That's less than 1/10 the price of Haiku 4.5. Bedrock isn't the cheapest option either, and I'm fairly sure they aren't being VC-subsidized. There are definitely cheap tokens out there. The big gotcha is "for tasks that can tolerate slightly less quality". |
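The price gap above is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming list prices of $1.00/1M in and $5.00/1M out for Haiku 4.5 (those figures are not from the thread) and a hypothetical job size:

```python
def job_cost(tokens_in: int, tokens_out: int, price_in: float, price_out: float) -> float:
    """Cost in USD for one job, given per-1M-token prices."""
    return tokens_in / 1e6 * price_in + tokens_out / 1e6 * price_out

# Hypothetical job: 2M input tokens, 500k output tokens.
glm = job_cost(2_000_000, 500_000, 0.07, 0.40)    # GLM 4.7 Flash on Bedrock (quoted above)
haiku = job_cost(2_000_000, 500_000, 1.00, 5.00)  # assumed Haiku 4.5 list pricing

print(f"GLM:   ${glm:.2f}")            # GLM:   $0.34
print(f"Haiku: ${haiku:.2f}")          # Haiku: $4.50
print(f"ratio: {haiku / glm:.1f}x")    # ratio: 13.2x
```

Output tokens dominate only when the output share is large; for a crawl-heavy or summarization-heavy workload the input price is what matters.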
| |
| ▲ | EduardoBautista 4 hours ago | parent | prev | next [-] | | Yes, but how cheap is it to run four at the same time? It's tough to run even one good model locally, and running four at once, which I commonly do with Claude and Codex, just doesn't seem to be happening anytime soon. | | |
| ▲ | Aurornis 3 hours ago | parent [-] | | I'm referring to hosted models, such as via OpenRouter or the model providers' own services. I think everyone claiming that inference is getting more expensive is unaware that there are more LLM providers than Google, Anthropic, and OpenAI. |
| |
| ▲ | boringg 5 hours ago | parent | prev [-] | | Fair - there are bets both ways, though I wouldn't consider it a certainty. The pressure to generate revenue from this AI build-out is going to be real and manifold. |
| |
| ▲ | bheadmaster 5 hours ago | parent | prev [-] | | It will take a few years until the scheduled data center construction finishes; that, together with any software optimizations that come along in the meantime, could cause a significant decrease in token prices. |
|
|
| ▲ | faangguyindia an hour ago | parent | prev | next [-] |
| Nobody can block the actual LLM providers; they use spoofed requests to scan the web for content, sometimes even using residential proxies. |
| |
| ▲ | nijave an hour ago | parent [-] | | Sure they can; proof of work seems to be effective. Anubis has become pretty popular |
|
|
| ▲ | johnsmith1840 5 hours ago | parent | prev | next [-] |
| And the lethal trifecta, but I suppose that applies to all agents as of now anyhow. Every AI provider has major warnings about letting AI have access to PII in the browser. |
|
| ▲ | ls612 3 hours ago | parent | prev | next [-] |
| They don’t need to be 100% effective; they just need to make you afraid enough of being banned to not bother trying. |
|
| ▲ | einpoklum 5 hours ago | parent | prev [-] |
| > the thing preventing mass-adoption seems to be the number of tokens it takes. Try the exorbitant expense and the ballooning waste of generated electricity and usable water. |