danielhanchen | 3 days ago
I would like to be in (1), but I'm not a packaging person, so I'll need to investigate more :(

(2) I might make the message on installing llama.cpp more informative - i.e. instead of redirecting people to the docs on manual compilation (https://docs.unsloth.ai/basics/troubleshooting-and-faqs#how-...), I might print a longer message directly in the Python cell (rough sketch below).

Yes, we're working on Docker! https://hub.docker.com/r/unsloth/unsloth
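For (2), something along these lines - just a sketch, not the actual Unsloth code, and the function name and message text are made up:

    # hypothetical sketch of a longer in-cell message when the build fails
    def print_llama_cpp_help(error: Exception) -> None:
        print(
            "Unsloth: llama.cpp could not be installed automatically.\n"
            f"Reason: {error}\n\n"
            "To compile it manually:\n"
            "  git clone https://github.com/ggerganov/llama.cpp\n"
            "  cmake llama.cpp -B llama.cpp/build\n"
            "  cmake --build llama.cpp/build --config Release -j\n\n"
            "Full guide: https://docs.unsloth.ai/basics/troubleshooting-and-faqs"
        )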
lucideer | 3 days ago
> Yes we're working on Docker!

That will be nice too, though I was really just referring to doing something along the lines of this in your current build:
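(untested sketch - the base image, mount path, and build flags are purely illustrative)

    # hypothetical sketch: build llama.cpp inside a throwaway container
    # at install time, instead of compiling on the host
    import os
    import subprocess

    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{os.getcwd()}/llama.cpp:/src",
            "-w", "/src",
            "debian:bookworm",
            "sh", "-c",
            "apt-get update && apt-get install -y build-essential cmake"
            " && cmake -B build && cmake --build build --config Release -j",
        ],
        check=True,
    )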
(likely mounting & calling a sh file instead of passing individual commands)

Although I do think getting the ggml guys to support Conan (or monkey-patching your own llama.cpp conanfile in before building) might be an easier route.