▲ | rfoo | 3 days ago |
IMO the correct thing to do to make these people happy, while staying sane, is: don't build llama.cpp on their system. Instead, bundle a portable llama.cpp binary with unsloth, so that when they install unsloth with `pip` (or `uv`) they get it. Some people may prefer using whatever llama.cpp is in $PATH; it's okay to support that too, though I'd say it may lead to more confused-noob spam - they may just have an outdated version lurking in $PATH.

Doing this makes the unsloth wheel platform-dependent. If that's too much of a burden, you could instead package the llama.cpp binary on its own on PyPI - like how the scikit-build folks maintain https://pypi.org/project/cmake/ (yes, you can `pip install cmake`) - and then depend on it (maybe in an optional group; I see you already have a lot of those due to the CUDA stuff).
▲ | danielhanchen | 3 days ago |
Oh yes, I was working on shipping binaries together with pip - currently we're relying on pyproject.toml, but once we switch to setup.py (I think), shipping binaries gets much simpler. I'm still working on it, but sadly I'm not a packaging person, so progress has been nearly zero :(
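For reference, a minimal setup.py sketch of one common way to do this - assuming setuptools plus the `wheel` package, with illustrative names (`unsloth/bin/llama-server` is an assumption, not the actual layout):

```python
# Hypothetical setup.py sketch for shipping a prebuilt native binary
# inside a wheel; package/binary names here are assumptions.
from setuptools import setup
from wheel.bdist_wheel import bdist_wheel

class PlatformWheel(bdist_wheel):
    def finalize_options(self):
        super().finalize_options()
        # A wheel containing a native binary must not be tagged
        # "any"; marking the tree impure gives it a platform tag
        # (e.g. py3-none-linux_x86_64), so pip picks the right one.
        self.root_is_pure = False

setup(
    packages=["unsloth"],
    # Ship the prebuilt llama.cpp binary as package data so it lands
    # next to the Python code after `pip install`.
    package_data={"unsloth": ["bin/llama-server"]},
    cmdclass={"bdist_wheel": PlatformWheel},
)
```

You then build one wheel per platform (`python -m build` on each target, or cross-build in CI) and upload them all; pip selects the matching one at install time.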