TheDong 7 hours ago

The cost of ownership for an OpenClaw, and how many credits you'll use, is really hard to estimate since it depends so wildly on what you do.

I can give you an openclaw instruction that will burn over $20k worth of credits in a matter of hours.

You could also not talk to your claw at all for the entire month, set up no crons / recurring activities / webhooks / etc., and get a bill of under $1 for token usage.

My usage of OpenClaw ends up costing on the order of $200/mo in tokens on the Claude Code Max plan (which you're technically not allowed to use with OpenClaw anymore), or over $2000 if I were using API credits, I think (which I believe Klause does, based on their FAQ mentioning OpenRouter).

So yeah, what I consider fairly light and normal usage of OpenClaw can quite easily hit $2000/mo, but it's also very possible to hit only $5/mo.

Most of my tokens are eaten up by having it write small pieces of code and by a good amount of web-browser orchestration. I've had two-sentence prompts that result in it spinning up subagents to browse and summarize thousands of webpages, which really eats a lot of tokens.

I've also given my OpenClaw access to its own AWS account, where it's capable of spinning up Lambdas and EC2 instances, writing to S3, etc., so right now it also has an AWS bill of around $100/mo (which I only expect to go up).

I haven't given it access to my credit card directly yet, so it hasn't managed to buy gift cards for any of the friendly Nigerian princes that email it to chat, but I assume that's only a matter of time.

grim_io 6 hours ago

Absolute madman :)

Giving an agent access to AWS is effectively giving it your credit card.

At most, I would give it SSH access to a Hetzner VM with its own user, capable of running rootless Podman containers.

haolez 6 hours ago

Not at all. AWS IAM policy is a complex maze, but it's incredibly powerful, and it solves this exact problem very well.
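For instance, you can scope the agent's role so it can only launch cheap instance types, using the `ec2:InstanceType` condition key (a sketch only; the instance types are placeholders, and a real RunInstances policy also needs statements covering the AMI, subnet, security group, etc.):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyCheapInstances",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": ["t3.micro", "t3.small"]
        }
      }
    }
  ]
}
```

Combine that with denies on the expensive services the agent has no business touching, and the blast radius gets pretty small.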

wiether 4 hours ago

Do you honestly believe that they made the effort of setting the appropriate roles and policies, though?

jimbob45 5 hours ago

Would having a locally-hosted model offset any of these costs?

kennywinker 4 hours ago

Yes, but that comes at the cost of using a dumber LLM. The state-of-the-art models are only available via commercial APIs, and the best self-hostable models require $10,000+ GPUs.

This is a problem for coding, where smarter really does make a difference, but there are so, so many tasks that an 8B model running on a $200 GPU can handle nicely. Scrape this page and dump JSON? Yeah, that's gonna be fine.

This is my conclusion based on a week or so of using ollama + qwen3.5:3b, self-hosted on a roughly ten-year-old Dell OptiPlex with only the built-in GPU. You don't need state of the art to do simple tasks.
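The scrape-and-dump-JSON case is roughly this (a sketch, not production code: the endpoint is Ollama's default `/api/generate` API, the model name is a placeholder, and `extract_json` is a helper I'm making up because small models tend to wrap their JSON in prose or code fences):

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local ollama endpoint

def ask_local_model(prompt, model="qwen3.5:3b"):
    # POST to the local ollama HTTP API; stream=False returns a single JSON reply.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def extract_json(reply):
    # Small models often wrap their JSON in prose or ``` fences,
    # so grab the first {...} block and parse just that.
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object in model reply")
    return json.loads(match.group())
```

Feed it the page text plus "summarize this as JSON with keys title and links" and parse whatever comes back; for tasks that shaped, the small model rarely struggles.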

robthompson2018 3 hours ago

Our starter plan gives you a machine with 2 GB of RAM, so you will not be able to run a local LLM. OpenRouter has free models (e.g. Z.ai's GLM 4.5 Air); I recommend those.

giancarlostoro 6 hours ago

Just have to know... What the heck are you building?