Running ollama for inference uses energy that wouldn't have been used if you weren't running it. There's no free lunch here.