atwrk 10 hours ago
"run" as in run locally? There's not much you can do with that little RAM. If remote models are OK, you could have a look at MiniMax M2.1 (minimax.io), GLM from z.ai, or Qwen3 Coder. You should be able to use all of these with your local OpenAI-compatible app.
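These providers typically expose OpenAI-compatible chat-completions endpoints, so an existing client usually just needs a different base URL and API key. A minimal sketch of building such a request with only the standard library; the base URL, model name, and key below are placeholders, not real values (check each provider's docs for the actual endpoint):

```python
import json
import urllib.request

# Placeholder values -- substitute the provider's real base URL,
# a model name they serve, and your own API key.
BASE_URL = "https://api.example-provider.com/v1"
API_KEY = "sk-..."

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a compatible API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("qwen3-coder", "Write a hello-world in C.")
# Actually sending it is a network call, so it is left commented out:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Apps that speak the OpenAI API usually let you set this base URL in their settings instead of writing any code.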