drake_112 5 days ago

Kagi says they will charge the actual token costs of the underlying APIs. Hopefully they will make the actual calculation visible soon.

A quick back-of-the-envelope estimate: in the outlier case, if those 3 million tokens are mostly output tokens and you always used an advanced model like GPT 4.1, which costs $8 per 1 million output tokens, you would be close to hitting the limit the plan provides ($24 out of $25 worth of tokens).

In most other scenarios (including a higher proportion of input vs. output tokens, and mixing in cheaper models) you would be a long way from hitting the limit. For example, if you used half of those tokens on GPT 4.1 Mini instead of GPT 4.1, you'd only be roughly halfway to the limit ($14 out of $25 worth of tokens).
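A minimal Python sketch of that estimate, assuming OpenAI's published API output rates of $8 per 1M tokens for GPT 4.1 and $1.60 per 1M for GPT 4.1 Mini, and treating every token as an output token (the worst case); Kagi's actual accounting may differ:

    # Back-of-the-envelope cost estimate. Prices are assumed OpenAI API
    # output rates (USD per 1M output tokens); actual billing may differ.
    OUTPUT_PRICE_PER_M = {"gpt-4.1": 8.00, "gpt-4.1-mini": 1.60}

    def cost_usd(tokens_by_model):
        """Worst case: treat every token as an output token."""
        return sum(tokens / 1_000_000 * OUTPUT_PRICE_PER_M[model]
                   for model, tokens in tokens_by_model.items())

    print(cost_usd({"gpt-4.1": 3_000_000}))                             # 24.0
    print(cost_usd({"gpt-4.1": 1_500_000, "gpt-4.1-mini": 1_500_000}))  # 14.4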

freezingDaniel 5 days ago | parent

I wish they had first added a usage meter before implementing the limit. If they had first given users a way to monitor their usage, users of the assistant could know how much they have to worry about the change.

As it stands, I use up to 2M tokens per month but have no clue how much this amount of tokens (across various models) costs.

And 5% of users hitting the limit, with no way to pay for usage past the limit (yet), is kind of scary, especially as I feel like I use AI more than my peers.