lukewarm707 8 hours ago

please tell me if i'm crazy.

i just refuse to use openai/google/anthropic subscriptions, i only use open source models with ZDR tokens.

- i like privacy in my work, and i share when i wish. somehow we accepted that our prompts and work may be read and moderated by employees. would you accept people moderating what you write in excel, google docs, apple pages?

- i want a consistent tool, not something that is quantised one day, slow one day, a different harness one day, stops randomly.

- unless i am missing something, the closed source models are too slow for me to watch what they are doing. i feel comfortable monitoring something at about 200-300 tps on GLM 5. above that it might even be too fast!

muskstinks 8 hours ago | parent | next [-]

It's a question of price, quality, and other factors.

If my company pays for it, i do not care.

If i have a hobby project where it's about converting an idea in my spare time into what i want, i'm happily paying 20$. I just did something like this on the weekend over a few hours. I really enjoy having small tools based on a single html page with javascript and json as a data store (i ask it to also add an import/export feature so i can literally edit the data in the app, then save it and commit it).
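The import/export idea above fits in a few lines of plain javascript. A minimal sketch (function names like `exportStore`/`importStore` are my own for illustration, not any library's API): export serializes the in-memory store to pretty-printed json, import parses a file's text back into it.

```javascript
// In-memory data store for a single-page tool; json is the on-disk format.
// exportStore/importStore are illustrative names, not a library API.
function exportStore(store) {
  // Pretty-print so the exported file is easy to hand-edit and diff in git.
  return JSON.stringify(store, null, 2);
}

function importStore(text) {
  const data = JSON.parse(text);
  if (typeof data !== "object" || data === null) {
    throw new Error("expected a JSON object");
  }
  return data;
}

// In the browser, wire exportStore to a download link and importStore to a
// file input, e.g.:
// const blob = new Blob([exportStore(store)], { type: "application/json" });
// downloadLink.href = URL.createObjectURL(blob);
```

Because the export is plain json, committing the file to git gives you a free edit history of the app's data.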

As for the main agent, i'm waiting for the kind that will read my emails and have access to systems. I would love a local setup, but just buying the hardware today still costs a grand plus a lot of energy. It's still significantly cheaper to just use a subscription.

Not sure what you mean regarding speed, though; they are super fast. I do not have a setup at home which can run 200-300 tps.

lukewarm707 7 hours ago | parent [-]

i don't use local models, i just use the APIs of cloud providers (eg fireworks, together, friendli, novita, even cerebras or groq).

you can get subscriptions to use the APIs from synthetic, ollama, or fireworks.
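Most of these providers expose an OpenAI-compatible chat completions endpoint, so switching between them is largely a matter of changing the base URL and API key. A minimal sketch (the base URL and model name below are placeholders, check each provider's docs for the real values):

```javascript
// Build a request for an OpenAI-compatible /chat/completions endpoint.
// baseUrl and model are illustrative; substitute your provider's values.
function buildChatRequest(baseUrl, apiKey, model, prompt) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage (hypothetical values):
// const { url, options } = buildChatRequest(
//   "https://api.example-provider.com/v1", key, "some-open-model", "hello");
// const res = await fetch(url, options);
```

Keeping the provider-specific bits down to two strings is what makes it cheap to move between ZDR providers when prices or rate limits change.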

johntash 7 minutes ago | parent | next [-]

I might be missing it, but does fireworks actually have a subscription? All I saw was serverless (per token) and gpu $/hr.

And since I saw a few other comments talking about these, do you have any preference on different cloud providers with ZDR? I look every once in a while and want to switch to completely open models and/or at least ZDR so I can start doing things like summarizing e-mail. I'm thinking I can probably split my use between some sort of cloud api and claude code for heavier tasks.

muskstinks 7 hours ago | parent | prev [-]

What's the big difference then? You can get a lot of tokens for 20$, and not everything i'm doing is a state secret.

But if i would use some API stuff, probably openrouter, isn't that easier to switch around and also have zero-knowledge safety?

lukewarm707 7 hours ago | parent [-]

i think that privacy is good for wellbeing. maybe that's a dying point of view.

muskstinks 7 hours ago | parent [-]

It is for sure, but running your own email is so time-intensive that i gave that up 10 years ago.

i then decided to trust one company with most stuff.

Also as I said, I would use something different for my personal stuff. But i'm waiting for the right hardware etc.

susupro1 8 hours ago | parent | prev [-]

You are not crazy, you are just waking up from the SaaS delusion. We somehow allowed the industry to convince us that paying $20/month to rent volatile compute, have our proprietary workflows surveilled, and get throttled mid-thought is an 'upgrade'. The pendulum is swinging violently back to local-native tools. Deterministic, privately owned, unmetered—buying your execution layer instead of renting it is the only way to build actual leverage.

muskstinks 7 hours ago | parent | next [-]

I'm quite aware of my dependency and i'm balancing this in and out regularly over the last 10 years.

Owning is expensive. Not owning is also expensive.

Energy in germany is at 35 cents/kWh and skyrocketed to 60 when we had the russian problem.

I'm planning to buy a farm and add cheap energy, but this investment will still take a bit of time. Until then, space is sparse.

lukewarm707 7 hours ago | parent | prev | next [-]

i don't use local llms. it's mostly the closed-source subscriptions that are not private; it really is a choice.

there are many cloud providers of zero data retention llm APIs, and even cryptographic attestation.

they are not throttled, you can get an agreed rate limit.

l72 3 hours ago | parent [-]

Would you mind naming some of your favorite providers?

staticassertion 8 hours ago | parent | prev | next [-]

No one was convinced to spend money to do the things you're saying. That's just disingenuous. People rent models because (a) it moves compute elsewhere and (b) they provide higher quality models.

nprateem 7 hours ago | parent [-]

c) It's turnkey instead of requiring months/years of custom dev and ongoing maintenance.

NoMoreNicksLeft 8 hours ago | parent | prev [-]

If I could buy this to run it locally, what's that hardware even look like? What model would I even run on the hardware? What framework would I need to have it do the things Claude Code can do?