flippant 5 hours ago
I did exactly this in early 2025 with a small keyword tagging pipeline. You may run into some issues with Docker and native deps once you get to production. Don't forget to cache the Bumblebee files.
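A minimal sketch of the caching tip above, assuming Bumblebee's standard `BUMBLEBEE_CACHE_DIR` environment variable and a Docker named volume (the volume name and image name here are illustrative, not from the original comment):

```shell
# Point Bumblebee's model cache at a fixed path inside the container,
# and back it with a named volume so downloaded model files survive
# container restarts instead of being re-fetched from Hugging Face.
docker run \
  -e BUMBLEBEE_CACHE_DIR=/data/bumblebee-cache \
  -v bumblebee-cache:/data/bumblebee-cache \
  my-elixir-app:latest
```

Baking the model files into the image at build time is the other common option, at the cost of a much larger image.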
dnautics 2 minutes ago | parent
No problem. It's an SLM, so I have a dedicated on-prem GPU server that I deploy behind Tailscale for inference. For training, I go to Lambda Labs and just rent a beefy GPU for a few hours, for the cost of a Starbucks coffee.