pshirshov 3 days ago
My current solution is to pack llama.cpp as a custom Nix formula (the one in nixpkgs has the conversion script broken) and run it myself. I wasn't able to run Unsloth on ROCm, neither for inference nor for conversion, so I'm sticking with PEFT for now, but I'll attempt to re-package it again.
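For anyone curious what "pack llama.cpp as a custom Nix formula" might look like: a rough sketch of a from-source derivation that also keeps the HF-to-GGUF conversion script. Everything here is a hedged guess at a minimal setup, not the actual nixpkgs expression — the Python dependency list in particular is incomplete, and `rev`/`hash` are placeholders you must pin.

```nix
# Hypothetical sketch, not the nixpkgs package: build llama.cpp from
# source and install the conversion script alongside the binaries.
{ lib, stdenv, fetchFromGitHub, cmake, python3 }:

stdenv.mkDerivation {
  pname = "llama-cpp-with-convert";
  version = "unstable";

  src = fetchFromGitHub {
    owner = "ggerganov";
    repo  = "llama.cpp";
    rev   = "master";      # placeholder: pin a real commit
    hash  = lib.fakeHash;  # placeholder: replace after first build
  };

  nativeBuildInputs = [ cmake ];

  # Assumption: the conversion script needs at least these; the real
  # requirements file in the repo lists more (gguf, transformers, ...).
  buildInputs = [ (python3.withPackages (ps: [ ps.numpy ps.torch ])) ];

  postInstall = ''
    # Keep the HF-to-GGUF conversion script next to the binaries,
    # since the stock package reportedly ships it broken.
    install -Dm755 $src/convert_hf_to_gguf.py $out/bin/convert_hf_to_gguf.py
  '';
}
```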
danielhanchen 3 days ago | parent
Oh interesting! For ROCm there are some installation instructions here: https://rocm.docs.amd.com/projects/ai-developer-hub/en/lates... I'm working with the AMD folks to make the process easier, but it looks like I first have to move off pyproject.toml to setup.py (which allows building binaries).
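The pyproject.toml-to-setup.py point is about compiling native code at install time: a plain pyproject.toml build is declarative, while setup.py lets you hand setuptools an `ext_modules` list to compile. A minimal sketch, with placeholder names that are not Unsloth's actual layout:

```python
# Hypothetical sketch of a setup.py that compiles a native extension.
# Package name, module name, and C source are illustrative placeholders.
from setuptools import setup, Extension

setup(
    name="example-kernels",  # placeholder project name
    version="0.1.0",
    ext_modules=[
        Extension(
            name="example_kernels._native",  # compiled .so module
            sources=["src/native.c"],        # placeholder C source
            extra_compile_args=["-O2"],
        )
    ],
)
```

Running `pip install .` against this would invoke the platform C compiler on the listed sources and ship the resulting binary inside the wheel.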