zozbot234 3 hours ago

For something like OpenClaw you realistically only need rather slow inference, so use SSD offload as described by adrian_b here: https://news.ycombinator.com/item?id=47832249. Though I'm not sure that support in the main inference frameworks (and arguably even in the GGUF format itself) is up to the task just yet.
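The core mechanism behind SSD offload is just memory-mapping the weight file and letting the OS page tensor data in from disk on demand. This is a minimal stdlib-only Python sketch of that idea (the file layout and `load_tensor` helper are hypothetical, not any framework's actual API); real inference engines like llama.cpp mmap GGUF files in a broadly similar way:

```python
import mmap
import os
import struct
import tempfile

# Fake "weights" file: one million little-endian float32 values, all 0.5.
# Stands in for a multi-gigabyte GGUF tensor blob living on an SSD.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
n = 1_000_000
with open(path, "wb") as f:
    f.write(struct.pack("<f", 0.5) * n)

# mmap the file read-only: nothing is loaded yet; the OS pulls in
# 4 KiB pages from disk only as they are touched.
f = open(path, "rb")
mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

def load_tensor(offset, count):
    """Hypothetical helper: read `count` float32s at byte `offset`.

    Only the pages this slice actually touches are faulted in from
    the SSD, so a model larger than RAM can still be traversed --
    slowly, which is fine for low-throughput agent workloads.
    """
    return struct.unpack_from(f"<{count}f", mm, offset)

row = load_tensor(0, 8)       # first 8 weights
print(row[0])                 # -> 0.5
```

The trade-off is exactly what the comment implies: each cold page fault costs an SSD read, so tokens/sec drops by orders of magnitude versus RAM-resident weights, but nothing breaks functionally.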