| ▲ | KetoManx64 a day ago |
| Tokens are insanely cheap at the moment.
Through OpenRouter a message to Sonnet costs about $0.001 cents or using Devstral 2512 it's about $0.0001.
An extended coding session/feature expansion will cost me about $5 in credits.
Split up your codebase so you don't have to feed all of it into the LLM at once, and it's very reasonable. |
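
A back-of-the-envelope sketch of the per-message math (the token counts and per-million-token prices here are illustrative assumptions, not OpenRouter's actual rates):

```python
def message_cost_usd(input_tokens, output_tokens,
                     in_price_per_mtok, out_price_per_mtok):
    """Cost of one chat message in USD, given per-million-token prices."""
    return (input_tokens * in_price_per_mtok
            + output_tokens * out_price_per_mtok) / 1_000_000

# Illustrative: a short prompt of ~300 input / ~100 output tokens,
# at hypothetical $3 input / $15 output per million tokens.
print(f"${message_cost_usd(300, 100, 3.00, 15.00):.5f}")
```

Small one-shot prompts land well under a cent; long coding sessions add up mostly through repeated large inputs, not outputs.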
|
| ▲ | lebovic a day ago | parent | next [-] |
| It cost me ~$750 to find a tricky privilege escalation bug in a complex codebase where I knew the rough specs but didn't have the exploit. There are certainly still many other bugs like that in the codebase, and it would cost $100k-$1MM to explore the rest of the system that deeply with models at or above the capability of Opus 4.6. It's definitely possible to do a basic pass for much less (I do this with autopen.dev), but it is still very expensive to exhaustively find the harder vulnerabilities. |
| |
| ▲ | christophilus 15 hours ago | parent | next [-] | | This is where the Codex and Claude Code Pro/Max plans are excellent. I rarely run into the limits of Codex. If I do, I wait and come back and have it resume once the window has expired. | | |
| ▲ | Jcampuzano2 14 hours ago | parent [-] | | Claude and Codex pro/max subs aren't supposed to be used for commercial/enterprise development, so it's not really an option for execs in enterprise. They need to take API costs into account. At my F500 company, execs are very wary of the costs of most of these tools, and it's always top of mind. We have dashboards and gather tons of internal metrics on which tools devs are using and how much they are costing. | | |
| ▲ | christophilus 10 hours ago | parent | next [-] | | No, I think that’s wrong. They aren’t supposed to be put behind a service, but they can certainly be used to write professional/enterprise products. | | | |
| ▲ | otterley 14 hours ago | parent | prev | next [-] | | Are they also measuring productivity? Measuring only token costs is like looking only at grocery spend but not the full receipt: you don’t know whether you fed your family for a week or for only a day. | | |
| ▲ | Jcampuzano2 8 hours ago | parent | next [-] | | I'm not one of those execs; I'm just echoing what I've been told by those who manage these dashboards and worry about this. I do think measuring productivity is not very clear-cut, especially with these tools. They do "attempt" to measure productivity, but they also just see large dollar amounts on AI costs and get wary. My company is also wary of going all in with any one tool or company due to how quickly things change. So far they've been pooling our costs across all tools and giving us an "honor system" limit we should try not to exceed per month, until we do commit to one suite of tools. | |
| ▲ | batshit_beaver 12 hours ago | parent | prev [-] | | First you have to figure out HOW to measure productivity. | | |
| ▲ | otterley 8 hours ago | parent [-] | | (Output / input), both of which are usually measured in money. If you can measure both of those things--and you have bigger problems if your finance department can't--it logically follows that you can measure productivity. | | |
| ▲ | Jcampuzano2 8 hours ago | parent [-] | | Measuring strictly in terms of money per unit time over a small enough timeframe is difficult because not all tasks produce immediately observable results. There are tasks worked on at large enterprises with 5+ year horizons, and those can't all be immediately tracked in terms of monetary gain that can be correlated with AI usage. We've barely even had AI as a daily development tool for a few years. |
|
|
| |
| ▲ | petesergeant 14 hours ago | parent | prev [-] | | > Claude and Codex pro/max subs aren't supposed to be used for commercial/enterprise development lolwut? | | |
|
| |
| ▲ | otterley 14 hours ago | parent | prev | next [-] | | How much would it have cost a human to do the same work? The question isn’t how much tokens cost; the question is how much money is saved by using AI to do it. | | | |
| ▲ | skeledrew 14 hours ago | parent | prev [-] | | Compare to the cost when said vulnerabilities are exploited by bad actors in critical systems. Worth it yet? |
|
|
| ▲ | zozbot234 13 hours ago | parent | prev | next [-] |
| Agentic tasks use up a huge amount of tokens compared to simple chatting. Every elementary interaction the model has with the outside world (even while doing something as simple as reading code from a large codebase) is a separate "chat" message and "response", and these add up very quickly. |
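
A rough model of why agentic loops burn so many tokens: each step resends the entire conversation so far as input, so total input tokens grow roughly quadratically with the number of tool calls. A sketch under assumed numbers (the prompt and per-step sizes are made up for illustration):

```python
def agent_input_tokens(steps, system_prompt=2_000, per_step=1_500):
    """Total input tokens across an agent loop where every step
    re-reads the full history (prompt + all prior step outputs)."""
    total = 0
    history = system_prompt
    for _ in range(steps):
        total += history      # the whole context is re-sent each step
        history += per_step   # tool output / reasoning is appended
    return total

# 5x more steps costs far more than 5x the input tokens:
print(agent_input_tokens(10), agent_input_tokens(50))
```

Prompt caching softens this in practice, but the asymmetry between "one chat message" and "one agentic task" remains large.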
|
| ▲ | gmerc a day ago | parent | prev | next [-] |
| You’d have to ignore the massive investor ROI expectations or somehow have no capability to look past “at the moment”. |
| |
| ▲ | NitpickLawyer 17 hours ago | parent | next [-] | | That might be a problem for the labs (although I don't think it is), but it's not a problem for end users. There is enough pressure from top labs competing with each other, and even more pressure from open models, to keep prices at a reasonable point going forward. In order to justify higher prices, the SotA needs to have way higher capabilities than the competition (hence justifying the price), and at the same time the competition needs to be way below a certain threshold. Once that threshold becomes "good enough for task x", the higher price doesn't make sense anymore. While there is some provider retention today, it will be harder to maintain once everyone offers kinda sorta the same capabilities. Changing an API provider might even be transparent to most users, and they wouldn't care. If you want an idea of token prices today, you can check the median for serving open models on openrouter or similar platforms. You'll get a "napkin math" estimate of what it costs to serve a model of a certain size today. As long as models don't go an order of magnitude bigger than today's largest models, API pricing seems in line with a modest profit (so it shouldn't be subsidised, and it should drop with tech progress). Another benefit of open models is that once they're released, that capability remains. The models can't get "worse". | |
| ▲ | KetoManx64 a day ago | parent | prev [-] | | Not really. I'm fully taking advantage of these low prices while they last. Eventually the AI companies will start running out of funny money and start charging what the models actually cost to run; then I'll just switch over to using self-hosted models more often and use the online ones for the projects that need the extra resources.
Currently there's no reason why I shouldn't use Claude Sonnet to write one-time bash scripts; once it starts costing me a dollar to do so, I'm going to change my behavior. | | |
| ▲ | deaux 20 hours ago | parent | next [-] | | > Currently there's no reason why I shouldn't use Claude Sonnet to write one-time bash scripts; once it starts costing me a dollar to do so, I'm going to change my behavior. This just isn't going to happen: we have open-weights models, whose running costs we can roughly calculate, that are on the level of Sonnet _right now_. The best open-weights models used to be 2 generations behind, then 1 generation behind; now they're on par with the mid-tier frontier models. You can choose among many different Kimi K2.5 providers. If you believe that every single one of those is running at 50% subsidies, be my guest. | |
| ▲ | skeledrew 13 hours ago | parent | prev | next [-] | | > start charging what the models actually cost to run The political climate won't allow that to happen. The US will do everything to stay ahead of China, and a rise in prices means a sizeable migration to Chinese models, giving them that much more data to improve their models and pass the US in AI capability (if they haven't already). But it will also happen in a way, as models will eventually become optimized enough that running costs are more or less negligible from a sustainability perspective. | |
| ▲ | twosdai 21 hours ago | parent | prev [-] | | I also have this feeling. But do you ever doubt that when the time comes we will be like the boiled frog? Where it's "just so convenient", or the reality of setting up a local AI is just a worse experience for a large upfront cost? | | |
| ▲ | iririririr 21 hours ago | parent [-] | | worse. he's already boiled. probably paying way more than that one dollar per bash script with all the subscriptions he already has. | | |
| ▲ | KetoManx64 20 hours ago | parent [-] | | Yeah, the $20 I paid to OpenRouter about 4 months ago really cost me an arm and a leg, not sure where I'll get my next meal if I'm to be honest. |
|
|
|
|
|
| ▲ | ThePowerOfFuet 19 hours ago | parent | prev [-] |
| >$0.001 cents $0.001 (1/10 of a cent) or 0.001 cents (1/1000 of a cent, or $0.00001)? |
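
The two readings differ by a factor of 100; a quick check (pure arithmetic, no claim about which price the parent comment actually meant):

```python
# "$0.001" vs "0.001 cents" as dollar amounts:
dollars_reading = 0.001       # $0.001 = one tenth of a cent
cents_reading = 0.001 / 100   # 0.001 cents = $0.00001
print(dollars_reading / cents_reading)  # ratio between the two readings
```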
| |