lelanthran 10 hours ago

> The problem is that it's much easier to use the SOTA models (especially if they are subsidized) instead of spending time fixing the knobs with the local one.

That's not a problem, that's a feature; I have something like 8 tabs open to different free-tier providers. ChatGPT, Claude and Gemini are the SOTA ones.

I have no problem maxing one out, then moving to the next. I can do this all day, having them implement specific functions (or classes) in my code. The thing is, because I actually know how to write and design software, I don't need to run an agent in a loop to produce everything in a day. I can use the web chatbots with copy/paste to literally generate thousands of lines of code per hour while still keeping a strong mental model of the code, so I can go in and change whatever I need to.[1]

---------------------

[1] Just did that this morning on a Python project: because I designed what I needed, each generation was me prompting for a single function. So when I needed to add something this morning, I didn't even bother asking a chatbot to do it; I just went directly to the correct place and did it.

You can't do that if you generate the entire thing from specs.

vb-8448 10 hours ago | parent [-]

We are speaking about local AI, and having all these SOTA models basically for free is blocking the progress of local or independent third-party setups.

lelanthran 10 hours ago | parent [-]

Maybe I should have clarified what the feature is. (After re-reading my post, I see that I basically just ended after adding the footnote.)

The feature of using all these SOTAs to exhaustion on the free tiers is burning their VC money!

The more I use for free, the more of their money I burn, and the closer we get to actual third-party and independent setups (local or otherwise).