_pdp_ 7 hours ago
The solution, as usual, is open source. For example, we recently moved a very expensive Sonnet 4.6 agent to step-3.5-flash and it works surprisingly well. Obviously step-3.5-flash is nowhere near the raw performance of Sonnet, but it works perfectly fine for this case.

Another personal observation: we are most likely going to see a lot of micro coding agent architectures everywhere. We have several such cases. GPT and Claude are not needed if you focus the agent on specific parts of the code. I wrote something about this here: https://chatbotkit.com/reflections/the-rise-of-micro-coding-...
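(A minimal sketch of the scoping idea behind such a "micro" agent, assuming one common approach: whitelist a slice of the repo so a small model only ever sees a focused context. The `AGENT_SCOPE` paths and file contents here are illustrative, not from the comment above.)

```python
from pathlib import PurePosixPath

# Hypothetical scope for a micro coding agent: it may only read and
# edit files under these path prefixes; everything else is invisible.
AGENT_SCOPE = ["src/billing/", "tests/billing/"]

def in_scope(path: str, scope=AGENT_SCOPE) -> bool:
    """Return True if the file falls inside the agent's allowed slice."""
    p = str(PurePosixPath(path))
    return any(p.startswith(prefix) for prefix in scope)

def build_context(repo_files: dict, scope=AGENT_SCOPE) -> str:
    """Concatenate only in-scope files into the prompt context, so a
    cheap model sees a few hundred lines instead of the whole repo."""
    parts = [
        f"--- {path} ---\n{text}"
        for path, text in sorted(repo_files.items())
        if in_scope(path, scope)
    ]
    return "\n\n".join(parts)

# Toy repo: the agent's prompt ends up containing only the billing files.
repo = {
    "src/billing/invoice.py": "def total(items): ...",
    "src/auth/login.py": "def login(user): ...",
    "tests/billing/test_invoice.py": "def test_total(): ...",
}
context = build_context(repo)
```

The point is that the model call itself (whatever provider or open-weights model backs it) receives `context` instead of the full tree, which is what makes a weaker model viable for the task.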
stavros 7 hours ago | parent | next
> The solution as usual is open source.

> Obviously step-3.5-flash is nowhere near the raw performance of sonnet

I feel like these two statements conflict with each other.
snarkyturtle 6 hours ago | parent | prev | next
Google releasing Gemma 4 yesterday was prescient. Toying around with Zed + Gemma 4 on my laptop, it's 95% as good as using a cloud provider.
nothinkjustai 6 hours ago | parent | prev
Yeah, this is similar to my approach, although with slightly more powerful models. I'm just not having a good time letting the SOTA models loose on a code base to implement entire features; I spend too much time cleaning up the mess. It's my fault, I needed to guide them more, but it would take the same amount of time to use a faster model to generate smaller chunks, and it would also cost less. And I'm not even doing anything particularly complex!

inb4 skill issue: I could probably beat you coding by hand with you using Claude Code.