Thanks for raising it! Since vLLM has an OpenAI-compatible API, this should work for now:
```shell
docker run --rm -p 8080:8080 \
  -e OPENAI_API_KEY="some-vllm-key-if-needed" \
  -e OPENAI_BASE_URL="http://host.docker.internal:11434/v1" \
  ... enterpilot/gomodel
```
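If the container can't reach the server, it's worth first confirming the endpoint answers from the host. A quick check (the host, port, and key below just mirror the command above and are assumptions — note that vLLM's own default port is 8000, so point this at wherever your server is actually listening):

```shell
# Sanity check: list the models the OpenAI-compatible endpoint serves.
# Host/port/key are assumptions taken from the docker command above;
# vLLM serves on port 8000 by default unless you override it.
curl -s http://localhost:11434/v1/models \
  -H "Authorization: Bearer some-vllm-key-if-needed"
```

If that returns a JSON model list, the base URL is good and any remaining issue is likely Docker networking (e.g. `host.docker.internal` resolution on Linux).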