llama.cpp supports running some or all of a model's layers on the CPU: you choose how many layers to offload to the GPU when the model is loaded, and the rest stay on the CPU. It does not swap layers into the GPU on demand during inference.
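The split is fixed at load time, e.g. via the `-ngl` / `--n-gpu-layers` flag on the CLI tools or the `n_gpu_layers` field in the C API. Here's a minimal sketch of the latter (exact signatures vary across llama.cpp versions, and `model.gguf` is a placeholder path):

```c
#include "llama.h"

int main(void) {
    llama_backend_init();

    struct llama_model_params mparams = llama_model_default_params();
    // Offload 20 layers to the GPU; the remaining layers run on the CPU.
    // This assignment happens once at load time -- layers are not swapped
    // between CPU and GPU on demand during inference.
    mparams.n_gpu_layers = 20;

    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }

    // ... create a context, tokenize, and decode as usual ...

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Setting `n_gpu_layers` to 0 keeps everything on the CPU; setting it above the model's layer count offloads everything (the common `-ngl 99` convention relies on this).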