tekacs 5 days ago

I've run into this a ton of times and these websites all kinda suck. Someone mentioned the OpenRouter /models endpoint in a sibling comment here, so I quickly threw this together just now. Please feel free to PR!

https://github.com/tekacs/llm-pricing

  llm-pricing

  Model                                     | Input | Output | Cache Read | Cache Write
  ------------------------------------------+-------+--------+------------+------------
  anthropic/claude-opus-4                   | 15.00 | 75.00  | 1.50       | 18.75      
  anthropic/claude-sonnet-4                 | 3.00  | 15.00  | 0.30       | 3.75       
  google/gemini-2.5-pro                     | 1.25  | 10.00  | N/A        | N/A        
  x-ai/grok-4                               | 3.00  | 15.00  | 0.75       | N/A        
  openai/gpt-4o                             | 2.50  | 10.00  | N/A        | N/A        
  ...
---

  llm-pricing calc 10000 200 -c 9500 opus-4 4.1

  Cost calculation: 10000 input + 200 output (9500 cached, 5m TTL)
  
  Model                      | Input     | Output    | Cache Read | Cache Write | Total    
  ---------------------------+-----------+-----------+------------+-------------+----------
  anthropic/claude-opus-4    | $0.007500 | $0.015000 | $0.014250  | $0.178125   | $0.214875
  openai/gpt-4.1             | $0.001000 | $0.001600 | $0.004750  | $0.000000   | $0.007350
  openai/gpt-4.1-mini        | $0.000200 | $0.000320 | $0.000950  | $0.000000   | $0.001470
  openai/gpt-4.1-nano        | $0.000050 | $0.000080 | $0.000237  | $0.000000   | $0.000367
  thudm/glm-4.1v-9b-thinking | $0.000018 | $0.000028 | $0.000333  | $0.000000   | $0.000378
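For reference, the arithmetic behind that calc output can be sketched in a few lines of Rust. This is a minimal sketch, not the tool's actual code: the struct and field names are my own, prices are in $ per 1M tokens (taken from the table above), and it assumes cached tokens are billed at cache-read plus cache-write rates instead of the input rate, which matches the Opus row.

```rust
/// Per-1M-token prices (hypothetical struct, not llm-pricing's API).
struct Pricing {
    input: f64,       // $ per 1M uncached input tokens
    output: f64,      // $ per 1M output tokens
    cache_read: f64,  // $ per 1M cached tokens read
    cache_write: f64, // $ per 1M tokens written to cache (5m TTL)
}

/// Cost of a request with `input` prompt tokens (of which `cached`
/// hit the cache) and `output` completion tokens.
fn cost(p: &Pricing, input: u64, output: u64, cached: u64) -> f64 {
    let m = 1_000_000.0;
    (input - cached) as f64 * p.input / m
        + output as f64 * p.output / m
        + cached as f64 * p.cache_read / m
        + cached as f64 * p.cache_write / m
}

fn main() {
    let opus4 = Pricing {
        input: 15.0,
        output: 75.0,
        cache_read: 1.5,
        cache_write: 18.75,
    };
    // 10000 input + 200 output, 9500 of the input cached:
    println!("${:.6}", cost(&opus4, 10_000, 200, 9_500)); // prints "$0.214875"
}
```

Note the zero cache-write column for the OpenAI rows above: providers without an explicit cache-write price would simply have `cache_write: 0.0` here.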
---

  llm-pricing opus-4 -v

  === ANTHROPIC ===

  Model: anthropic/claude-opus-4
    Name: Anthropic: Claude Opus 4
    Description: Claude Opus 4 is benchmarked as the world's best coding model, at time of release, 
    bringing sustained performance on complex, long-running tasks and agent workflows. It sets new 
    benchmarks in software engineering, achieving leading results on SWE-bench (72.5%) and 
    Terminal-bench (43.2%).
    Pricing:
      Input: $15.00 per 1M tokens
      Output: $75.00 per 1M tokens
      Cache Read: $1.50 per 1M tokens
      Cache Write: $18.75 per 1M tokens
      Per Request: $0
      Image: $0.024
    Context Length: 200000 tokens
    Modality: text+image->text
    Tokenizer: Claude
    Max Completion Tokens: 32000
    Moderated: true
tekacs 4 days ago

Cache pricing tweaked & fixed since the above.

jjani 4 days ago

This looks cool and I'd like to try it out, but the cargo version is old (it doesn't match the README) and it's not clear which platforms the binaries support.

tekacs 3 days ago

Pushed a new version to cargo for now. GitHub Releases is giving me headaches. >.<

tekacs 3 days ago

Will fix!