Fair. I have high hopes for local inference; it feels like right now it's simply cost-prohibitive to get the hardware. It will be interesting to see what happens.