info_sh_com 3 hours ago
I'm using this right now with an RTX A5000 (24 GB VRAM) for a few .NET projects at work. It's the first local LLM setup I've used that produces usable code.
i7l 3 hours ago | parent
Looks and sounds interesting... Is there anything beyond glue that makes the Qwen models it uses better for development than what you get from local models through Ollama in an IDE or editor of your choice?