minimaxir 6 hours ago

I just finished my Flux 2 testing (focusing on the Pro variant here: https://replicate.com/black-forest-labs/flux-2-pro). Overall, it's a tough sell to use Flux 2 over Nano Banana for the same use cases, but even if Nano Banana didn't exist, it would only be an iterative improvement over Flux 1.1 Pro.

Some notes:

- Running my nuanced Nano Banana prompts through Flux 2 shows it definitely has better prompt adherence than Flux 1.1, but in all cases the image quality was worse/more obviously AI-generated.

- The prompting guide for Flux 2 (https://docs.bfl.ai/guides/prompting_guide_flux2) encourages JSON prompting by default, which is new for an image generation model and only feasible because Flux 2 has a text encoder that can support it. It also encourages hex color prompting, which I've verified works. (There's a rough sketch of both after these notes.)

- Prompt upsampling is an option, but it's one that's pushed in the documentation (https://github.com/black-forest-labs/flux2/blob/main/docs/fl...). This does allow the model to deductively reason: e.g., if asked to generate an image of a Fibonacci implementation in Python, it fails hilariously if prompt upsampling is disabled, but gets somewhere if it's enabled: https://x.com/minimaxir/status/1993361220595044793

- The Flux 2 API will flag anything tangentially related to IP as sensitive, even at its lowest sensitivity level, which is a change from the Flux 1.1 API. If you enable prompt upsampling, the prompt won't get flagged, but the results are...unexpected. https://x.com/minimaxir/status/1993365968605864010

- On cost and generation speed, Flux 2 Pro is on par with Nano Banana, and adding an image as an input pushes the cost of Flux 2 Pro above Nano Banana's. The cost discrepancy grows if you use the advertised multi-image reference feature.

- Comparing Flux 1.1 generations against Flux 2 generations does not produce an objective winner, particularly for more abstract prompts.
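
To make the JSON/hex-color prompting concrete, here's a minimal sketch of such a call through the Replicate Python client. The JSON keys are invented for illustration (the guide doesn't mandate a single schema), and prompt_upsampling/safety_tolerance are input names from the Flux 1.1 Pro API that I'm assuming carry over to Flux 2; check the model's input schema before copying this.

    import json
    import replicate  # pip install replicate

    # A JSON-structured prompt in the spirit of the Flux 2 prompting guide.
    # These particular keys are made up for illustration.
    structured_prompt = {
        "scene": "a minimalist home office at golden hour",
        "subject": "a ceramic mug on a walnut desk",
        "style": "editorial photography, shallow depth of field",
        "colors": {
            "mug": "#2E86AB",   # hex color prompting
            "wall": "#F5F0E1",
        },
    }

    output = replicate.run(
        "black-forest-labs/flux-2-pro",
        input={
            # The text encoder consumes the JSON as an ordinary prompt string.
            "prompt": json.dumps(structured_prompt),
            # Assumed parameter names, mirroring Flux 1.1 Pro's inputs.
            "prompt_upsampling": True,  # let the model rewrite/expand the prompt
            "safety_tolerance": 6,      # least strict filtering level
        },
    )
    print(output)  # URL/file handle of the generated image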

loudmax 3 hours ago | parent | next

The mere possibility of running Flux locally might be enough to sway the balance in some cases. For example, if you've built a workflow around Nano Banana and Google jacks up the price or changes the API, you have no choice but to go along. If BFL does the same, you at least have the option of running locally.
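
For reference, "running locally" for the Flux family currently looks something like the diffusers snippet below. This uses the known FLUX.1-dev pipeline as a stand-in; whether Flux 2 Dev ships with the same diffusers integration is an assumption on my part.

    import torch
    from diffusers import FluxPipeline

    # Download the open weights once; after that there are no per-image API
    # fees and no dependence on a provider keeping prices or endpoints stable.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",  # Flux 2 Dev would presumably be analogous
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # trade speed for fitting in consumer VRAM

    image = pipe(
        "a ceramic mug on a walnut desk, editorial photography",
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save("mug.png")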

minimaxir 3 hours ago | parent

Those cases imply commercial workflows, which are prohibited with the open-weights model unless you purchase a license.

I am curious to see how the Apache 2.0 distilled variant performs, but it's still unlikely that the economics will favor it unless you have a specific niche use case: the engineering effort needed to scale up image inference for these large models isn't free.

vunderba 6 hours ago | parent | prev | next

I've re-run my benchmark with the Flux 2 Pro model and found that the higher-resolution models (I believe Flux 2 Pro handles 4K) can actually backfire on some of the tests, because they introduce the equivalent of an ESRGAN-style upscale that can add unwanted additional details. (See the Constanza test in particular.)

https://genai-showdown.specr.net/image-editing

minimaxir 5 hours ago | parent

That Constanza test result is baffling.

vunderba 4 hours ago | parent

Agreed - I was quite surprised. Even though it's a bog-standard 1024x1024 image, the somewhat low-quality nature of a TV still makes for an interesting challenge. All the BFL models (Kontext Max and Flux 2 Pro) seemed to struggle hard with it.

babaganoosh89 4 hours ago | parent | prev

Flux 2 Dev is not IP censored

minimaxir 3 hours ago | parent

Do you have generations contradicting that? The HF repo for the open-weights Flux 2 Dev says that IP filters are in place (and implies it's a violation of the license to do so).

EDIT: I'm seeing a few generations on /r/StableDiffusion producing IP from the open-weights model.