drillsteps5 | 4 days ago |
If he was building a compute device specifically for LLM inference, it would help to check in advance what that actually entails, like the GPU and memory requirements. Putting a bunch of RPis in a cluster doesn't help with that one bit. Maybe I'm missing something. |
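To make the point concrete, here's a rough back-of-envelope sketch (the 70B model size and Pi RAM figure are just illustrative assumptions, not anything from the project) showing why weight memory alone rules out a Pi cluster for serious inference:

```python
# Rough sizing: memory needed just to hold model weights for inference,
# compared with what a single Raspberry Pi offers. Numbers are illustrative.

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory (GB) required to hold the weights alone."""
    return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

# Example: a hypothetical 70B-parameter model at different precisions
for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"70B @ {label}: ~{weights_gb(70, bpp):.0f} GB of weights")

# A Pi tops out around 8-16 GB of comparatively slow RAM and has no
# CUDA-class GPU, so even before counting the KV cache or activations the
# weights don't fit on one node, and sharding across many Pis is bottlenecked
# by Ethernet and memory bandwidth rather than helped by the extra cores.
```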