Llama-Factory: Unified, Efficient Fine-Tuning for 100 Open LLMs (github.com)
130 points by jinqueeny 5 days ago | 19 comments
kelsey98765431 5 days ago | parent | next [-]

FYI it also supports pre-training, reward model training, and RL, not just fine-tuning (SFT). My team built a managed solution for training that runs on top of LLaMA Factory, and it's quite excellent and well supported. You will need pretty serious equipment to get good results out of it, think 8xH200. For people at home I would look at doing an SFT of Gemma 3 270M or maybe a 1.6B Qwen3, but keep in mind you have to have the dataset in memory as well as the model and KV cache.
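
For the curious, here's roughly what a small at-home SFT run looks like with plain Hugging Face tooling -- a minimal sketch, not LLaMA Factory's internals; the model name and hyperparameters are placeholders:

    import torch
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_id = "google/gemma-3-270m"  # placeholder; any small causal LM works
    tok = AutoTokenizer.from_pretrained(model_id)
    tok.pad_token = tok.pad_token or tok.eos_token  # make sure padding works
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # LoRA keeps the trainable parameter count (and optimizer state) small.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             target_modules=["q_proj", "v_proj"],
                                             task_type="CAUSAL_LM"))

    # Toy in-memory dataset -- note it shares RAM/VRAM with the model
    # weights and the KV cache, which is the budget constraint above.
    rows = [{"text": "### Question: What is 2+2?\n### Answer: 4"}] * 64
    ds = Dataset.from_list(rows).map(
        lambda ex: tok(ex["text"], truncation=True, max_length=512),
        remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                               per_device_train_batch_size=2, learning_rate=2e-4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

cheers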

spagettnet 4 days ago | parent | next [-]

Depends on your goals, of course. But worth mentioning there are plenty of narrowish tasks (think text-to-SQL and other less general language tasks) where Llama 8B or Phi-4 (14B), or even up to 30B with quantization, can be trained on 8xA100 with great results. Plus these smaller models can be served on a single A100 or even an L4 with post-training quantization, with wicked fast generation thanks to the lighter model.
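
The quantized loading side is a one-liner these days. A minimal sketch, assuming bitsandbytes NF4 and Phi-4 as the example model:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # NF4 4-bit weights with bf16 compute: a ~14B model fits on a single
    # A100 or even an L4, and it's the same trick QLoRA training relies on.
    bnb = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_quant_type="nf4",
                             bnb_4bit_compute_dtype=torch.bfloat16)
    model = AutoModelForCausalLM.from_pretrained("microsoft/phi-4",
                                                 quantization_config=bnb,
                                                 device_map="auto")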

On a related note, at what point are people going to get tired of waiting 20s for an LLM to answer their questions? I wish it were more common for smaller models to be used when they're sufficient.

zwaps 3 days ago | parent | prev [-]

Why do you have to keep the dataset in memory? We've had good distributed streaming datasets for a while now, no?
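
E.g. with Hugging Face datasets you can stream instead of materializing the corpus -- a generic sketch, not necessarily what LLaMA Factory does internally:

    from datasets import load_dataset

    # streaming=True returns an IterableDataset: samples are fetched
    # lazily over the network, so the corpus never has to fit in RAM.
    ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
    for sample in ds.take(2):
        print(sample["text"][:80])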

Ambix 6 hours ago | parent | prev | next [-]

I've used this meta-framework for LLM tuning; it's really one of the best out there.

Twirrim 5 days ago | parent | prev | next [-]

https://llamafactory.readthedocs.io/en/latest/

I found this link more useful.

"LLaMA Factory is an easy-to-use and efficient platform for training and fine-tuning large language models. With LLaMA Factory, you can fine-tune hundreds of pre-trained models locally without writing any code."

yunohn 4 days ago | parent [-]

Is it a bug or are most documentation pages only available in ZH-CN and not EN?

tempodox 4 days ago | parent [-]

I'd say documentation that is only readable by those who know Chinese is a bug. You could open an issue to ask for a translation if there isn't one yet.

metadat 5 days ago | parent | prev | next [-]

This reminds me conceptually of the Nvidia NIM factory, where they attempt to optimize models in bulk.

https://www.nvidia.com/en-us/ai/nim-for-manufacturing/

Word on the street is the project has yielded largely unimpressive results compared to its potential, but NV is still investing in an attempt to further raise the GPU saturation waterline.

P.S. This project's logo stood out to me as presenting the llama releasing some "steam" with gusto. I wonder if that was intentional? Sorry for the immature take, but resisting the scatological jokes is tough.

stefanwebb 4 days ago | parent | prev | next [-]

There’s a similar library that also includes data synth and LLM-as-a-Judge: https://github.com/oumi-ai/oumi
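
For anyone unfamiliar, LLM-as-a-Judge just means scoring one model's outputs with another model. A generic sketch of the idea, not oumi's actual API:

    from openai import OpenAI

    client = OpenAI()  # any OpenAI-compatible endpoint

    def judge(question: str, answer: str) -> int:
        # Ask a stronger model to grade the candidate answer from 1 to 5.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder judge model
            messages=[{"role": "user", "content":
                       f"Rate this answer from 1 to 5. Reply with the digit only.\n"
                       f"Question: {question}\nAnswer: {answer}"}])
        return int(resp.choices[0].message.content.strip())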

BoorishBears 4 days ago | parent [-]

Yet another framework lying about DeepSeek support.

I've been trying to actually fine-tune DeepSeek (not the distills) and there are few options.

3abiton 4 days ago | parent [-]

Which version were you trying? Doesn't Unsloth already support fine-tuning it?

BoorishBears 4 days ago | parent [-]

The previous V3 base.

Unsloth doesn't have an official multi-GPU story: there are hacked-together solutions, but they're finicky even for smaller models.

In general DeepSeek has very few resources on fine-tuning, which get muddied even further by people referring to the distills when they claim to be fine-tuning it.

edd25 4 days ago | parent | prev | next [-]

This looks awesome! I've been struggling to fine-tune using Discord messages from my server (for memes), mostly due to CUDA issues. Will definitely try this out!

On a side note, has anyone tried something similar? I have 100K messages and want to make a "dumb persona" that reflects the general Discord server vibe. I don't really care if it's accurate. What models would be most suitable for this task? My setup is not that powerful: a 4070S and 32GB of RAM for training, and a Lenovo M715q (Ryzen 5 PRO 2400GE, 16GB of RAM) for inference.
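
For shaping that kind of data, something like this sketch could work (the field names are assumptions about a typical Discord JSON export):

    import json

    # Turn a flat message log into context -> response pairs: each message
    # becomes a target, with the previous few messages as its context.
    with open("discord_export.json") as f:
        messages = json.load(f)  # assumed shape: [{"author": ..., "content": ...}, ...]

    pairs = []
    for i in range(3, len(messages)):
        context = "\n".join(f"{m['author']}: {m['content']}"
                            for m in messages[i - 3:i])
        target = f"{messages[i]['author']}: {messages[i]['content']}"
        pairs.append({"prompt": context, "completion": target})

    with open("train.jsonl", "w") as f:
        for p in pairs:
            f.write(json.dumps(p) + "\n")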

sabareesh 4 days ago | parent | prev | next [-]

This is great, but most of the work is in curating the dataset and the objective functions for RL.
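
For verifiable tasks the objective can at least start very small. A toy, framework-agnostic sketch of a reward function:

    import re

    def exact_answer_reward(completion: str, reference: str) -> float:
        # Full credit for the correct extracted answer, a small format
        # bonus for producing the expected "Answer: ..." pattern at all.
        m = re.search(r"Answer:\s*(.+)", completion)
        if m is None:
            return 0.0
        return 1.0 if m.group(1).strip() == reference.strip() else 0.1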

tensorlibb 5 days ago | parent | prev | next [-]

This is incredible! What GPU configs, from budget to ultra high-end, would you recommend for local fine-tuning?

Always curious to see what other AI enthusiasts are running!

spagettnet 4 days ago | parent [-]

Axolotl is great on consumer hardware.

hall0ween 5 days ago | parent | prev | next [-]

Are there any use cases, aside from code generation and formatting, where fine-tuning is consistently useful?

clipclopflop 5 days ago | parent [-]

Creating small, specialized models for specific tasks. Leveraging the up-front training and data of a generalized base lets you quickly create a small local model whose outputs for that task come close to, or match, what you would see from a large hosted model.
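
The usual bootstrap is to let the large model write the small model's training set. A rough sketch, with placeholder model names and prompts:

    from openai import OpenAI

    client = OpenAI()
    seed_tasks = ["List three uses of SQL window functions.",
                  "Explain a LEFT JOIN in one sentence."]  # placeholder prompts

    # Query the large hosted model once, offline, to label the data;
    # the small local model is then fine-tuned on these pairs.
    rows = []
    for task in seed_tasks:
        out = client.chat.completions.create(
            model="gpt-4o",  # placeholder teacher model
            messages=[{"role": "user", "content": task}])
        rows.append({"prompt": task,
                     "completion": out.choices[0].message.content})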

jcuenod 5 days ago | parent | prev [-]

Can you compare this to Unsloth?