vunderba 4 hours ago
I've done some preliminary testing with Z-Image Turbo over the past week. Thoughts:

- It's fast (~3 seconds on my RTX 4090)
- Surprisingly capable of maintaining image integrity even at high resolutions (1536x1024, sometimes 2048x2048)
- Prompt adherence is impressive for a 6B-parameter model

Some tests (2 / 4 passed):

Personally I find it works better as a refiner model downstream of Qwen-Image 20B, which has significantly better prompt understanding but gives its generations an unnatural "smoothness".
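For anyone curious what that two-stage setup looks like in practice, here is a minimal sketch using diffusers Auto pipelines. The model repo IDs, Auto-pipeline support for these checkpoints, and the strength/step values are assumptions, not confirmed settings from the comment above.

```python
# Sketch of "Qwen-Image for layout, Z-Image Turbo as a low-strength refiner".
# Repo IDs and Auto-pipeline compatibility are assumptions; adjust as needed.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

prompt = "a lighthouse on a rocky coast at dusk, film photo"

# Stage 1: base generation with the stronger prompt-following model (assumed repo id).
base = AutoPipelineForText2Image.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
draft = base(prompt=prompt, width=1536, height=1024).images[0]

# Stage 2: low-strength img2img pass with Z-Image Turbo (assumed repo id) to
# re-texture the overly smooth output while keeping the composition intact.
refiner = AutoPipelineForImage2Image.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")
final = refiner(
    prompt=prompt,
    image=draft,
    strength=0.3,           # small strength: refine detail, don't repaint
    num_inference_steps=8,  # turbo/distilled models need very few steps
).images[0]
final.save("refined.png")
```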
amrrs 3 hours ago
On fal it often takes less than a second: https://fal.ai/models/fal-ai/z-image/turbo/api

Couple that with a LoRA and you can generate completely personalized images in about 3 seconds. The speed alone is a big factor, but put the model side by side with Seedream, Nano Banana, and other models and it's definitely in the top 5. That's a killer combo imho.
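A minimal sketch of calling that fal endpoint from Python with the fal_client package (the endpoint ID comes from the URL above; the image-size and LoRA argument names and the response shape are assumptions, so check the linked API page for the real schema):

```python
# Requires FAL_KEY in the environment. Argument and response field names
# beyond "prompt" are assumptions about the endpoint's schema.
import fal_client

result = fal_client.subscribe(
    "fal-ai/z-image/turbo",
    arguments={
        "prompt": "portrait photo of a woman in a red coat, soft window light",
        "image_size": {"width": 1024, "height": 1024},  # assumed field name
        # Hypothetical personalization LoRA; the key names are assumptions.
        "loras": [{"path": "https://example.com/my-subject-lora.safetensors",
                   "scale": 1.0}],
    },
)
print(result["images"][0]["url"])  # assumed response shape
```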
echelon 4 hours ago
So does this finally replace SDXL? Is Flux 1/2/Kontext left in the dust by the Z-Image and Qwen combo?