542458 | 8 hours ago
> Run FLUX.2 [dev] on GeForce RTX GPUs for local experimentation with an optimized fp8 reference implementation of FLUX.2 [dev], created in collaboration with NVIDIA and ComfyUI.

Glad to see that they're sticking with open weights. That said, Flux 1.x was 12B params, right? So this is about 3x as large, plus a 24B text encoder (unless I'm misunderstanding), which might make it a significant challenge for local use. I'll be looking forward to the distilled version.
minimaxir | 8 hours ago | parent
Looking at the file sizes on the open weights version (https://huggingface.co/black-forest-labs/FLUX.2-dev/tree/mai...), the 24B text encoder is 48GB and the generation model itself is 64GB, which roughly tracks with the 32B parameters mentioned. Downloading over 100GB of model weights is a tough sell for local-only hobbyists.
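The "roughly tracks" arithmetic can be sketched as a quick back-of-the-envelope check. This assumes the checkpoints are stored at 2 bytes per parameter (e.g. bf16) and uses decimal GB; the function name and assumption are mine, not from the repo:

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate checkpoint size in GB: parameter count x bytes per parameter."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# At an assumed 2 bytes/param (bf16-style storage):
print(weights_gb(24, 2))  # 48.0 -- consistent with the 48GB text encoder
print(weights_gb(32, 2))  # 64.0 -- consistent with the 64GB generation model

# An fp8 variant (1 byte/param) would roughly halve those figures.
print(weights_gb(32, 1))  # 32.0
```

Under this assumption the listed file sizes line up almost exactly with the stated 24B and 32B parameter counts.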