info_sh_com 3 hours ago

I'm using this right now with an RTX A5000 (24 GB VRAM) on a few .NET projects at work. It's the first local LLM setup I've used that produces usable code.

i7l 3 hours ago | parent [-]

Looks and sounds interesting... Is there anything beyond glue that makes the Qwen models it uses better for development than what you get from local models through Ollama in an IDE or editor of your choice?

startuphakk 2 hours ago | parent [-]

There are tweaks at each layer that we have engineered. But it is a full, OSS agent with subagents, so you control every layer of the stack. Plus it provides a free dual-box setup where you can leave the inference at home and use the agent remotely from anywhere, which is our custom setup and very handy.
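
For anyone curious what a dual-box setup looks like in practice: the agent box just needs to speak an OpenAI-compatible API to the inference box at home (Ollama and vLLM both expose one). A minimal sketch, not the project's actual code; the hostname, port, and model tag are assumptions, and in practice you would reach the home box over a VPN or SSH tunnel:

```python
import json
import urllib.request

# Hypothetical home endpoint -- expose it over a VPN or SSH tunnel
# rather than the open internet. 11434 is Ollama's default port.
HOME_BOX = "http://home-box.example:11434"

def build_chat_request(base_url, model, messages):
    """Build an OpenAI-compatible /v1/chat/completions request."""
    url = f"{base_url}/v1/chat/completions"
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    HOME_BOX,
    "qwen2.5-coder:32b",  # assumed model tag for illustration
    [{"role": "user", "content": "Refactor this C# method to be async."}],
)
# The remote agent would then send req with urllib.request.urlopen(req)
# and parse the completion, exactly as it would against a cloud API.
```

The point is that nothing about the agent has to know it is remote: it just gets a base URL, and whether that resolves to localhost or a box at home behind a tunnel is a networking detail.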