rfoo 3 days ago

IMO the correct way to make these people happy, while staying sane, is: don't build llama.cpp on their system. Instead, bundle a portable llama.cpp binary along with unsloth, so that when they install unsloth with `pip` (or `uv`) they get it.

Some people may prefer using whatever llama.cpp is in $PATH, and it's okay to support that, though I'd say doing so may invite more spam from confused noob users - they may just have an outdated version lurking in $PATH.

Doing so makes the unsloth wheel platform-dependent. If that's too much of a burden, you could instead package the llama.cpp binary as its own project on PyPI, like how the scipy folks maintain https://pypi.org/project/cmake/ (yes, you can `pip install cmake`), and then depend on it (maybe in an optional group; I see you already have a lot of those due to the cuda stuff).

danielhanchen 3 days ago | parent [-]

Oh yes, I was working on providing binaries together with pip - currently we rely on pyproject.toml, but once we switch to setup.py (I think), shipping binaries gets much simpler

I'm still working on it, but sadly I'm not a packaging person so progress has been nearly zero :(

ffsm8 3 days ago | parent | next [-]

I think you slightly misunderstood rfoo's suggestion.

From how I interpreted it, he meant you could create a new Python package that would effectively just be the binary you need.

Your current package could then depend on the new one and, through that, pull in the binary.

This would also let you cleanly decouple your package from the binary, so you could update the binary to the latest version without pushing a new release of your original package.

I've maintained release pipelines and handled packaging in a previous job, but I'm not particularly deep into the Python ecosystem, so take this with a grain of salt. One approach would be:

Pip packages:

    * unsloth: the current package; prefers unsloth-llama, falls back to llama-cpp on $PATH, and as a final fallback errors with a message prompting the user to install unsloth-llama
    * unsloth-llama: a new package that only bundles the llama.cpp binary
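That fallback order could be sketched like this - `unsloth_llama` and its `bin/` layout are hypothetical names for the proposed companion package, not a real API:

```python
import shutil
from pathlib import Path

def find_llama_quantize() -> str:
    """Resolve llama-quantize: bundled package first, then $PATH, then error."""
    try:
        # 1. Prefer the hypothetical unsloth-llama companion package,
        #    which would ship prebuilt llama.cpp binaries inside itself.
        import unsloth_llama
        return str(Path(unsloth_llama.__file__).parent / "bin" / "llama-quantize")
    except ImportError:
        pass
    # 2. Fall back to whatever llama.cpp build is on $PATH
    #    (may be outdated, so a version check here would be wise).
    on_path = shutil.which("llama-quantize")
    if on_path:
        return on_path
    # 3. Final fallback: fail with a message saying what to install.
    raise RuntimeError(
        "llama.cpp not found. Install the bundled build with "
        "`pip install unsloth-llama`, or put llama-quantize on your PATH."
    )
```
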
danielhanchen 3 days ago | parent [-]

Oh ok, sorry, maybe I misunderstood! I actually found my partial work on precompiled binaries: https://huggingface.co/datasets/unsloth/precompiled_llama_cp...

I was trying to see if I could pre-compile some llama.cpp binaries and save them as a zip file (I'm a noob, sorry) - but I definitely need to investigate further how to ship binaries via pip

docfort 3 days ago | parent [-]

https://docs.astral.sh/uv/guides/package/#publishing-your-pa...

danielhanchen 2 days ago | parent [-]

Oh thanks - we currently publish to PyPI so pip install works: https://pypi.org/project/unsloth/

But I think for uv, similarly, we'd need a setup.py for packaging binaries (more complex)
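For reference, the setuptools side of bundling a prebuilt binary can be fairly small. A sketch (package name and paths are made up): the binary ships as package data, and overriding `has_ext_modules` is a known way to force a platform-specific wheel tag even though there are no actual extension modules to compile:

```python
# Minimal setuptools sketch for shipping a prebuilt binary in a wheel.
# Because the wheel contains a compiled binary, it has to be built
# once per platform (hence the platform-specific wheel tag below).
from setuptools import setup
from setuptools.dist import Distribution

class BinaryDistribution(Distribution):
    """Force a platform wheel tag despite having no ext_modules."""
    def has_ext_modules(self):
        return True

setup(
    name="unsloth-llama",  # hypothetical companion package
    version="0.1.0",
    packages=["unsloth_llama"],
    package_data={"unsloth_llama": ["bin/llama-quantize"]},
    include_package_data=True,
    distclass=BinaryDistribution,
)
```
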

rat9988 3 days ago | parent | prev [-]

Don't worry. Don't let the rednecks screaming here affect you. For one, I'm happy that you automated this part and sad to see it go away. People will always complain; it might be reasonable feedback worth acting on, but don't let their tone distract you. Some of them are just angry all day.

danielhanchen 3 days ago | parent [-]

Thanks - hopefully the compromise solution, i.e. a Python input() prompt asking for the user's permission, works ok?

rpdillon 3 days ago | parent [-]

As a guy who would naturally be in the "installing packages is never okay" camp, I also live in the practical world where people want things to work. I think the compromise you're suggesting is a pretty good one. The highest-quality implementation here would be:

1. Try to find a prebuilt binary and download it.

2. If a compiler is installed, see if you can compile from source.

3. If there's no compiler: prompt to install one via sudo apt, explain why, and give the option to abort so the user can install a compiler themselves.

This isn't perfect, but it limits the cases where prompting is necessary.
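The last two steps of that ladder might look something like this - a sketch only, assuming an apt-based system; the apt command and function name are illustrative:

```python
import shutil
import subprocess

def ensure_compiler(assume_yes: bool = False) -> bool:
    """Use an existing compiler if present, else ask permission before apt.

    Step 1 of the ladder (downloading a prebuilt binary) is assumed to
    have been tried and failed before this is called.
    """
    # Step 2: a compiler is already installed -- just use it.
    if shutil.which("gcc") or shutil.which("clang"):
        return True
    # Step 3: no compiler. Explain why, ask permission, allow aborting.
    print("No C compiler found. Building llama.cpp from source needs gcc or clang.")
    print("We can run: sudo apt-get install -y build-essential")
    answer = "y" if assume_yes else input("Proceed? [y/N] ").strip().lower()
    if answer != "y":
        print("Aborted. Install a compiler yourself and re-run.")
        return False
    return subprocess.call(
        ["sudo", "apt-get", "install", "-y", "build-essential"]
    ) == 0
```

Only the no-compiler path ever prompts, which is the point: most users never see the question.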

danielhanchen 3 days ago | parent [-]

I'm going to see if I can make prebuilt versions work :) But thanks!