No problem. It's an SLM; I have a dedicated on-prem GPU server that I deploy behind Tailscale for inference. For training, I reach out to Lambda Labs and just rent a beefy GPU for a few hours, for the cost of a Starbucks coffee.
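The inference half of that setup can be sketched in a couple of commands. This is just an illustrative assumption of the stack: a vLLM-style OpenAI-compatible server on port 8000 stands in for whatever inference server is actually running, and the model path is a placeholder.

```shell
# On the GPU box: serve the SLM locally (vLLM chosen purely as an example;
# any HTTP inference server would work the same way).
python -m vllm.entrypoints.openai.api_server --model ./my-slm --port 8000 &

# Join the tailnet, then expose the port so only devices on the
# same tailnet can reach it (no public internet exposure).
tailscale up
tailscale serve --bg 8000
```

From any other machine on the tailnet, the server is then reachable at the GPU box's Tailscale hostname, with no port-forwarding or reverse proxy to maintain.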